[vc_row][vc_column][vc_column_text]

Saugat Roy (Kolkata, India), Chief Audiologist at Audient Hearing Solutions and student of SAERA (Master in Clinical Audiology and Hearing Therapy).

      In this study, Saugat Roy analyses the ways in which environmental noise affects our communication and how it can limit people with a hearing impairment. It has been established that individuals with sensorineural hearing loss (SNHL) have greater difficulty understanding speech in background noise than normal hearing individuals under the same conditions.

      The High Frequency Bengali Speech Identification Test (HF-BSIT) was developed in Bangla to meet the colloquial needs of the population.

      Prior to using the test on individuals with hearing impairment, it is essential to obtain normative data. For this reason, a speech perception in noise test was developed using high frequency words and restaurant noise, and norms were obtained on a sample of Bengali speaking adults. The noise was introduced in order to decrease the external redundancy. The effects of SNR and gender were studied. The test material was built from the word subtest of the HF-BSIT, developed by a group of linguists and speech pathologists (2016), together with restaurant noise. One track of the software program Cool Edit Pro contained a word list while another track contained the restaurant noise. The test material was administered to 40 normal hearing adults, who were tested at three different SNRs: 0, 10 and 20 dB.

      The results showed that speech-in-noise tests would be useful in evaluating individuals with gradual hearing loss who complain of auditory perception problems but do not show a problem on routine speech tests. The tests can also be useful in selecting amplification devices for hearing impaired individuals with gradual sloping hearing loss who do not show any difficulty in perceiving high frequency words in a typical test situation where noise is not used.

[/vc_column_text][vc_toggle title="Read the full article" style="round" color="blue" el_id="1542376231501-e7dc4891-eeb7"]

ABSTRACT

Even for a normal auditory system, understanding speech in background noise is an extremely difficult task. In a quiet situation, an individual may not display any difficulty in perceiving words due to the presence of redundant cues. The scenario changes in day-to-day situations, where there is exposure to various composite noises. To address this need, the High Frequency Bengali Speech Identification Test (HF-BSIT) was developed in Bangla to meet the colloquial needs of the population. This test will also be useful in selecting amplification devices for hearing impaired individuals with gradual sloping loss. The test material was standardized on 40 native Bengali speaking adults (twenty males and twenty females) aged 18 to 30 years. The results also highlighted the importance of signal-to-noise ratio and gender.

INTRODUCTION

Understanding speech in background noise is occasionally difficult even for normal listeners. It has been demonstrated that sensorineural hearing impairment is associated with reduced speech recognition in noise compared to normal hearing (Palva, 1955; Plomp, 1978). There is a small proportion of people with normal hearing thresholds and normal speech recognition in quiet surroundings who have great difficulty managing in an everyday noisy environment (dysacusis). Even though speech-in-noise tests do not provide any significant help in localizing a lesion in the hearing system (Jayaram, Baguley & Moffat, 1992), a fast and reliable test helps in predicting and assessing the benefit of amplification (Plomp, 1978; Dirks, Dubno & Morgan, 1984), in assessing job suitability, and in medico-legal work (Lutman, Brown & Coles, 1986).

Understanding speech in adverse conditions is an extremely important and challenging task for the human auditory system (Beattie, Barr & Roup, 1997). During daily conversation, most people possess the ability to “tune out” interfering noises that emanate from various directions, focusing instead on signals of interest. When adverse conditions disrupt speech perception, miscommunication is usually a temporary, albeit annoying, inconvenience because most conversations offer ample opportunity to repeat words or phrases not initially understood (Beattie, 1989). The ability to understand speech in noise depends upon multiple factors such as the characteristics of the speech signal, the signal-to-noise ratio, and the listener’s degree of hearing impairment. A routine hearing evaluation usually does not provide ample information about a listener’s functional communication abilities.

In everyday listening conditions, there is always some noise present. The listener, however, also observes the speaker, which improves the perception of speech in noise (O'Neil, 1954; Sumby & Pollack, 1954; Erber, 1969; Sanders & Goodrich, 1971; Ludvigsen, 1973). In such conditions, the speaker tries to compensate for the noise interference by raising the voice level in order to keep the subjective loudness of speech in noise equal to the loudness of speech in quiet (Markides, 1986).

Daily communication requires the ability to understand speech in varying degrees of noise. Normal hearing individuals do not complain about understanding speech in quiet environments, but may have some difficulty with understanding speech in noisy environments (Wilson & Strouse, 1999). It has been established that individuals with sensorineural hearing loss (SNHL) demonstrate greater difficulty understanding speech in background noise than do normal hearing individuals under the same conditions (Dubno, Dirks & Morgan, 1984). Each of these variables interacts with the others and plays a role in determining how well one understands speech in any given environment (Nilson, Soli & Sullivan, 1994). Listeners with identical word recognition abilities in a quiet background can have significantly different word recognition abilities in a noisy background (Beattie, Barr & Roup, 1997; Wilson & Strouse, 1999).

Speech is a redundant auditory signal comprised of many bits of information (Martin, 1994). Similarly, our language structure has both extrinsic and intrinsic redundancies. The extrinsic redundancies involve information obtained from phonemes and syntax, while the listener possesses intrinsic redundancies based on their experience with the language (Miller, Heise & Lichten, 1951). The more extrinsic and intrinsic redundancies available, the easier it becomes to understand the speech signal (Miller, Heise & Lichten, 1951). Due to speech redundancy, normal hearing individuals can understand the signal even when it is highly degraded, as in a crowded restaurant (Wilson & Strouse, 1999).

The redundancy of the speech signal varies depending on whether one is listening to words in isolation, listening to sentences or participating in a conversation (Festen & Plomp, 1990). Generally, it is much easier to understand longer speech signals than short ones, even when the speech is embedded in background noise. Sentences are the easiest signal to understand as they provide the listener with acoustic information, semantic and contextual cues, and linguistic content. These signals provide greater redundancy. It is much easier to understand a conversation about a known subject, than single syllable words. Monosyllabic words, embedded in background noise, are the most difficult speech signal to comprehend. However, due to the increased redundancy and contextual cues in sentence materials, it becomes more difficult to determine whether the listener has perceived the entire sentence or has responded to a few key words that convey the meaning of the sentence (Wilson & Strouse, 1999).

Speech-in-noise testing has what some consider an advantage over many other central tests in that no specially recorded tapes or procedures are required for administration. Only a conventional audiometer and standardized recorded monosyllabic word lists are needed. A clinician with a more cavalier approach may even choose to use monitored live voice. It is this ease of administration that has caused speech-in-noise testing to be one of the most used, and perhaps misused, speech tests of Central Auditory Nervous System (CANS) function. Clinicians unaccustomed to and unprepared for conducting central testing often resort to this test when faced with a patient with a possible CANS disorder. Unfortunately, the testing is frequently conducted without normative data and in many cases without knowledge of the actual S/N ratio (identical audiometer dial settings for the speech and noise signals do not guarantee a 0 dB S/N ratio measured in sound pressure level (SPL)). A reasonable method for reducing the variability of speech-in-noise recognition measurements is to record the monosyllables and the white noise at the desired S/N ratio on the same tape track. Additionally, by presenting both the speech and noise from a single recording, the second channel of the audiometer remains available for presenting contralateral masking, if necessary.

High frequency components above 8 kHz contribute to speech understanding in noise for subjects with normal hearing (Ramos de Miguel et al., 2015). A study of the effect of signal-to-noise ratio on the speech perception ability of older adults likewise showed reduced speech perception at low-mid thresholds when the signal was decreased and the noise was increased (Shojaei et al., 2016).

Though speech-in-noise tests have primarily been used to test for the presence of an auditory processing problem, they have also gained importance as realistic tests for determining the utility of hearing aids.

Sparseness and redundancy give rise to an account of speech perception in noise based on glimpsing. Many studies have demonstrated that a single competing talker or amplitude-modulated noise is a far less effective masker than multi-speaker babble or speech-shaped noise (Festen & Plomp, 1990).

Need for the study

  • Individuals with high frequency hearing loss need to be tested with words consisting primarily of high frequency speech sounds. To meet this need, specific high frequency word tests have been developed (Gardner High Frequency Word Lists, 1971; Pascoe High Frequency Test, 1975; California Consonant Test, 1977; Speech Identification Test for Hindi and Urdu Speakers, 2001; HF-KSIT, Mascarenhas, 2002). In a quiet situation, some individuals with high frequency hearing loss may not display any difficulty in perceiving some of the high frequency words due to the presence of redundant cues. In order to decrease the external redundancy, noise can be introduced (Miller, Heise & Lichten, 1951).
  • Speech-in-noise tests have been developed with words covering all phonemes of the language (Egan, 1948). However, a speech-in-noise test that includes only high frequency Bengali words has not been developed.
  • This test would be highly useful in selecting amplification devices for hearing impaired individuals with gradual sloping hearing loss who do not show any difficulty in perceiving high frequency words in a typical test situation where noise is not used. Prior to using the test on individuals with hearing impairment, it is essential to obtain normative data. This information would enable the audiologist to know how deviant a hearing impaired individual is compared to normal hearing individuals. Hence, it is not only essential to develop a high frequency speech-in-noise test, but also necessary to obtain normative data.

Goals of the study

  1. To develop a speech perception in noise test making use of high frequency words and restaurant noise.
  2. To obtain normative data for the developed test on Bengali speaking adults.
  3. To compare the norms across different signal-to-noise ratios.
  4. To study the effect of gender on the developed test.

METHOD

The aim of the present study was to develop normative data for speech perception in noise with high frequency Bengali words as stimuli for adult Bengali speakers. The study was done in two stages:

Stage I: The development of the test material

Stage II:  Administration of the test on normal hearing individuals

Stage I:  Development of the test material

The material used for the study was obtained from the High Frequency Bengali Speech Identification Test (HF-BSIT), developed by a group of linguists and speech pathologists. Each word subtest contained 25 words with an equal distribution of high frequency consonants. The material was prepared using the Cool Edit software: the recorded version of the HF-BSIT was copied onto one track while restaurant noise was recorded on a second track. It was ensured, by normalizing the signals, that the noise and speech signal were of equal loudness. A 1000 Hz calibration tone was recorded before each word list and was used to adjust the VU meter of the audiometer to zero.
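The level-matching step above (setting a speech and noise track to a desired relative level) can be sketched in code. This is a minimal illustration, not the actual Cool Edit workflow; the function names and the pure-Python list-of-samples representation are assumptions made for the example.

```python
import math

def rms(signal):
    """Root-mean-square level of a signal given as a list of samples."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise track so the speech-to-noise level difference
    equals snr_db (SNR in dB = 20*log10(rms_speech / rms_noise)),
    then mix the two tracks sample by sample."""
    target_noise_rms = rms(speech) / (10 ** (snr_db / 20.0))
    gain = target_noise_rms / rms(noise)
    return [s + gain * n for s, n in zip(speech, noise)]

# Example: a synthetic "speech" tone and a synthetic "noise" track
speech = [math.sin(i / 5.0) for i in range(1000)]
noise = [0.3 * math.sin(i / 3.1 + 1.0) for i in range(1000)]
mixed = mix_at_snr(speech, noise, 10)  # mix at +10 dB SNR
```

In practice the study adjusted relative levels at the audiometer attenuator dials; the sketch only shows the arithmetic relating an SNR in dB to a linear gain.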

Stage II: Administration of the Test

Subjects

     Forty Bengali speaking adults (twenty female and twenty male) aged 18 to 30 years were tested. The average age of the normal hearing subjects was 22.4 years. Listeners satisfied the following criteria: (a) bilateral pure tone air and bone conduction thresholds of less than or equal to 15 dB hearing level (HL; ANSI, 1996) for the octave frequencies 250 to 8000 Hz; (b) normal bilateral immittance results; (c) air-bone gap of less than 10 dB HL; (d) no documented history of otitis media; (e) no apparent articulatory abnormality; and (f) literacy.

Equipment and Speech Material

      The subjects were tested using a Madsen Electronics Orbiter OB 922 clinical audiometer with TDH-39 headphones in MX41/AR cushions and a B 71 bone vibrator. The audiometer was calibrated according to ANSI 1996 standards. Immittance testing was done using a GSI Tympstar. The speech material consisted of words from the High Frequency Bengali Speech Identification Test. The material developed for the study was played using the Cool Edit software. The signals from the two tracks were routed from a Pentium IV computer to the tape and auxiliary inputs of the clinical audiometer (Orbiter OB 922). It was ensured that the signals from the two tracks were sent to two different channels but to the same ear. The intensity of the two tracks was manipulated using the attenuator dial of the audiometer. The two word lists, each consisting of 25 words, were routed from the computer to the audiometer and presented to each participant through an MX41/AR earphone.

Environment

     The testing was done in a sound treated double room, with ambient noise levels within permissible limits as recommended by ANSI, 1991 (S3.1-1991; cited in Wilber, 1994).

Procedure 

1. For subject selection

Initially, all subjects were tested for pure tone thresholds. Testing was done for the frequencies 250 Hz to 8000 Hz for air conduction and 250 Hz to 4000 Hz for bone conduction. All subjects were also tested for normal middle ear function using tympanometry and the acoustic reflex test.

2. For obtaining speech-in-noise scores

The individuals who passed the subject selection criteria were recruited for speech identification testing in the presence of noise. The subjects were first instructed that they would be hearing speech and noise in one ear. They were asked to attend to the speech signals and write down what they heard. Subjects were also informed that they could guess the test items in case they were not very clear. The subjects were tested at 40 dB above their pure tone average (the average of thresholds at the speech frequencies 500 Hz, 1000 Hz and 2000 Hz) (ASHA, 1997, cited in Rupp & Stockdell, 1980). Half of the subjects were tested in the right ear while the other half were tested in the left ear. The noise levels were varied so as to present the signals at 0, +10 and +20 dB SNR. All subjects first heard the test material at 0 dB SNR, followed by +10 and +20 dB SNR. The subjects heard the same list in the 0 dB SNR and +20 dB SNR conditions, and a different list in the +10 dB SNR condition. Half of the subjects were tested with List I in the first and last noise conditions, while the other half heard List II. Thus, it was ensured that all subjects were tested in all three SNR conditions.
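The counterbalancing just described (ear and word-list assignment across subjects) can be made explicit with a short scheduling sketch. The function below is hypothetical, written only to illustrate the design; the study does not describe an algorithmic assignment.

```python
def assign_conditions(n_subjects):
    """Counterbalance test ear and word list across subjects:
    half are tested in the right ear and half in the left; half
    hear List I in the 0 and +20 dB SNR conditions (with List II
    at +10 dB SNR), and the other half the reverse.
    Returns one plan dict per subject."""
    plans = []
    for i in range(n_subjects):
        first_last = "List I" if i % 4 < 2 else "List II"
        middle = "List II" if first_last == "List I" else "List I"
        plans.append({
            "ear": "right" if i % 2 == 0 else "left",
            "0 dB SNR": first_last,    # always presented first
            "+10 dB SNR": middle,
            "+20 dB SNR": first_last,  # same list as at 0 dB SNR
        })
    return plans
```

With 40 subjects this yields 20 right-ear and 20 left-ear assignments, and 20 subjects hearing List I in the first and last noise conditions, matching the split described above.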

Scoring

     The responses obtained from the subjects were scored as right or wrong: each correct word was given a score of one and each wrong word a score of zero. The responses were then statistically analysed.
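A scoring rule of this kind (one point per correctly written word, zero otherwise) can be expressed as a short function. The normalization by case and surrounding whitespace is an assumption added for robustness, not part of the study's stated procedure.

```python
def score_responses(presented, written):
    """Score each written response 1 if it matches the presented word
    (ignoring case and surrounding whitespace), else 0; return the
    per-word scores and the total."""
    scores = [1 if w.strip().lower() == p.strip().lower() else 0
              for p, w in zip(presented, written)]
    return scores, sum(scores)

# Hypothetical example with three presented words
scores, total = score_responses(
    ["shikha", "kichu", "spasta"],
    ["Shikha ", "kishu", "spasta"],
)
```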

RESULTS AND DISCUSSION

The data obtained from the normal hearing population were analyzed using SPSS version 10.0. Analysis of variance (ANOVA) was done for:

  •      Effect of SNR
  •      Effect of list
  •      Effect of gender on speech identification scores at different SNRs
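For reference, the F statistic behind a one-way ANOVA of this kind can be computed by hand: the between-group mean square divided by the within-group mean square. The sketch below uses made-up scores, not the study's data.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, df = N - k
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical SIS for three SNR conditions (illustrative numbers only)
f_stat = one_way_anova_f([13, 14, 13], [17, 18, 17], [19, 19, 20])
```

In practice a statistics package such as SPSS (used here) reports this F together with its degrees of freedom and p value; the hand computation is shown only to make the quantity being tested concrete.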

1) Effect of SNR

     An initial analysis of variance (ANOVA) showed a significant effect of SNR on speech identification scores (SIS) for both lists (F(2,57) = 102.38, p < 0.05 for List I; F(2,57) = 191.435, p < 0.05 for List II). The effect of SNR across lists was analyzed using Tukey's post hoc test, which revealed a significant effect of SNR at 0, 10 and 20 dB for both Lists I and II (Table 1). Figure 1 depicts the mean speech identification scores and standard deviations at the different SNRs, across lists and gender. It shows that 20 dB SNR gave the best SIS, whereas 0 dB SNR gave the worst, with the SIS at 10 dB SNR in between. This pattern was seen for both Lists I and II.

Table 1: Effect of SNRs on SIS

SNR (dB)      List I 'p'    List II 'p'
0 vs. 10      0.000*        0.000*
10 vs. 20     0.007*        0.000*
0 vs. 20      0.000*        0.000*

*Significant at the 0.01 level.

Initial analysis revealed that five of the words in each list were extremely difficult for the normal hearing subjects in the 0 dB SNR condition: 72% to 75% of the subjects found these words difficult in that condition in both lists. Hence, it was decided to drop words that more than 50% of the subjects failed to identify. The words included in List I and List II for the final analysis are given in Appendix A.

The results of the present study concur with documented reductions in the discrimination performance of normal hearing listeners in noise. Earlier studies (Young & Herbert, 1970; Keith & Talis, 1970, 1972; Olsen, Noffsinger & Kurdziel, 1975) also reported an identification decrement in the presence of noise in normal hearing adults. Speech identification scores decrease with decreasing SNR because of the greater masking effect that takes place (Nelson, Schroder & Wojtczak, 2001). On account of the masking, the external redundancy present in the speech signal decreases, making it more difficult for the subject to perceive the signal.

Figure 1: The mean speech identification scores. Error bars show +/- 1 SD

2)  Effect of list

To check whether the two lists used in the present study were equivalent, a one-way ANOVA was carried out. It showed no statistically significant difference between the lists at the different SNRs. Table 2 shows the summary of this analysis. No significant difference was noted at either the 0.01 or the 0.05 level.

Table 2: Mean, SD and F-values for Lists I and II

SNR (dB)    List I Mean+ (SD)    List II Mean+ (SD)    'F' value
0           13.45 (1.79)         13.20 (1.24)          0.263
10          17.75 (1.12)         17.25 (1.02)          0.196
20          19.05 (0.76)         19.32 (0.81)          1.45

+ Maximum score = 20

The analysis done prior to the deletion of the five words from each list showed that the lists were not equal. The lists were found to be unequal in the lower SNR conditions (0 dB SNR and 10 dB SNR); however, at the higher SNR condition (20 dB SNR) the lists were found to be equal. In the presence of lower SNRs, the intelligibility of certain words probably dropped, making it difficult for the subjects to perceive them. On removal of these words, the inequality of the two lists disappeared. Hence, it is recommended that, when using the HF-BSIT in the presence of noise, only the words included in the two lists given in Appendix A be used, and not the entire original lists.

3) Effect of gender

     One-way ANOVA was performed to examine the effect of gender on speech identification scores at the different SNRs for both lists. The means, SDs and 'F' values are shown in Table 3. For both Lists I and II, at 0 dB SNR, there was a significant difference between the SIS of males and females, with females scoring higher than males. However, no effect of gender was observed at 10 and 20 dB SNR for either list.

Table 3: Mean, SD and F-values for the effect of gender

List   SNR (dB)   Male Mean+ (SD)    Female Mean+ (SD)    'F' value
I      0          12.6 (1.96)        14.3 (1.16)          5.594*
I      10         17.7 (1.25)        17.8 (1.03)          0.038
I      20         18.8 (0.79)        17.3 (0.67)          2.32
II     0          12.6 (0.96)        13.8 (1.22)          5.89*
II     10         18.1 (1.19)        17.7 (0.82)          0.758
II     20         19.1 (0.99)        19.6 (0.51)          1.991

* Significant at the 0.05 level; + Maximum score = 20

Such a gender difference in the presence of noise has been reported by Gatehouse (1994). According to Gatehouse, males needed more intensity to "just follow" speech in quiet as well as in noise backgrounds compared to females. Similar findings have also been reported by Govil (2002), who found that in the presence of noise, females in three different age groups (6-8 years, 8-10 years, and 18-30 years) obtained significantly higher scores than males. That study used an SNR of 10 dB.

A possible reason why females obtain higher scores in the presence of noise could be that females are able to use both hemispheres for processing, whereas males tend not to. This inference is based on an investigation by Kanasaku, Yamaura and Kitazawa (2000), who reported that females use the posterior temporal lobe more bilaterally than males during linguistic processing of global structures.

Hence, it is recommended that, when using a speech-in-noise test, a client's responses be compared to the norms for that client's sex. This should especially be done at lower SNRs.

Tables 4 and 5 indicate that as the SNR decreased, the number of subjects who could not perceive specific words increased. At 0 dB SNR, depending on the word, 5% to 50% of the subjects did not perceive the stimulus. No word was perceived correctly by all subjects in this noise condition. List I had words that were missed by a larger number of subjects, while in List II this variability was smaller.

The number of subjects who did not perceive words correctly was lower in the 10 dB SNR and 20 dB SNR conditions. Based on these findings, it is recommended that the 0 dB SNR condition not be used when testing hearing impaired individuals in the presence of noise, as this condition is difficult even for normal hearing individuals.

Table 4: Percentage of subjects showing errors for specific words in List I (a dash indicates that no error percentage was listed for that condition)

Words        0 dB SNR    10 dB SNR    20 dB SNR
Shikhā       30%         15%          5%
Sḣiit        45%         25%          5%
Karchila     35%         –            –
Shakāl       20%         5%           –
Kichu        20%         –            –
Sandhyā      25%         5%           –
Ciṯkār       50%         30%          10%
Spasta       25%         10%          5%
Hājār        15%         –            –
Steśane      50%         20%          10%
Sāikele      40%         15%          –
Chābi        30%         10%          –
Hāngor       20%         5%           5%
Choti        40%         10%          5%
Kāgoj        30%         –            –
Karpur       35%         10%          –
Shanibār     45%         15%          –
Jāhāj        45%         15%          –
Jhākuni      50%         25%          10%
Cḣitkāni     55%         30%          15%

Table 5: Percentage of subjects showing errors for specific words in List II (a dash indicates that no error percentage was listed for that condition)

Words        0 dB SNR    10 dB SNR    20 dB SNR
Kariyāchi    50%         10%          5%
Sāptāha      35%         5%           –
Ýāchen       50%         20%          10%
Hiṅgshuk     40%         15%          5%
Haengla      45%         10%          5%
Kāk          25%         –            –
Hānshi       35%         –            –
Snigdhā      50%         20%          15%
Kokil        25%         –            –
Ciṉmoy       40%         10%          10%
Tānpora      30%         5%           –
Chāgol       35%         5%           –
Jhātā        40%         10%          5%
Shāp         40%         –            –
Ýātra        45%         20%          5%
Chābuk       30%         10%          –
Chātā        35%         5%           –
Jhogrā       40%         –            –
Chāmach      40%         –            –
Shābdhān     45%         10%          10%

From the above data analysis, it may be concluded that:

  1. The material developed (HF-BSIT with background competition from restaurant noise) may be used to assess the perception of individuals in difficult listening conditions.
  2. With the decrease in SNR, the speech identification scores decreased. This was seen for both List I and List II.
  3. Lists I and II were found to be equal after the deletion from each list of the five words that were difficult for the majority of the subjects to perceive.
  4. Males and females performed equally well at an SNR of 20 dB. However, when the SNR was reduced to 0 dB, females outperformed males.
  5. The two word subtests of the HF-BSIT can be used to evaluate speech-in-noise performance, provided the lists are modified as given in Appendix A.
  6. It is recommended that the 0 dB SNR condition not be used when testing hearing impaired individuals, as normal hearing individuals also found this condition too difficult.

SUMMARY AND CONCLUSIONS

     Daily communication requires the ability to understand speech in varying degrees of noise. Normal hearing individuals do not complain about understanding speech in quiet environments, but may have some difficulty with understanding speech in noisy environments (Wilson & Strouse, 1999). It has been established that individuals with sensorineural hearing loss (SNHL) demonstrate greater difficulty understanding speech in background noise than do normal hearing individuals under the same conditions (Dubno, Dirks & Morgan, 1984).

The present study was undertaken to develop a speech perception in noise test using high frequency words and restaurant noise, and to obtain norms on a sample of Bengali speaking adults. The effects of SNR and gender were studied. The test material was built from the word subtest of the HF-BSIT, developed by a group of linguists and speech pathologists (2016), together with restaurant noise. One track of the software program Cool Edit Pro contained the word list while another track contained the restaurant noise. The test material was administered to 40 normal hearing adults, who were tested at three different SNRs: 0, 10 and 20 dB.

Analysis of the data was done using ANOVA. The analysis revealed the following:

  1. The material developed (HF-BSIT with background competition from restaurant noise) may be used to assess the perception of individuals in difficult listening conditions.
  2. With the decrease in SNR, the speech identification scores decreased. This was seen for both List I and List II.
  3. Lists I and II were found to be equal after the deletion from each list of the five words that were difficult for the majority of the subjects to perceive.
  4. Males and females performed equally well at an SNR of 20 dB. However, when the SNR was reduced to 0 dB, females outperformed males.
  5. The two word subtests of the HF-BSIT can be used to evaluate speech-in-noise performance, provided the lists are modified as given in Appendix A.
  6. It is recommended that the 0 dB SNR condition not be used when testing hearing impaired individuals, as normal hearing individuals also found this condition too difficult.

IMPLICATIONS

  1. The present speech-in-noise test would be useful in evaluating individuals with gradual hearing loss who complain of auditory perception problems but do not show problems on routine speech tests.
  2. It would be useful in selecting amplification devices for individuals with gradual sloping hearing loss.

BIBLIOGRAPHY

ANSI: American National Standards Institute (1996). Specifications for audiometers (ANSI S3.6-1996). New York: American National Standards Institute.

Beattie, R.C. (1989), Word recognition functions for the CID W-22 test in multitalker noise for normally hearing and hearing impaired subjects. Journal of Speech Hearing Disorders, 54, 20-32.

Dirks, D.D., Dubno, J.R., & Morgan, D.E. (1984), Effects of age and mild hearing loss on speech recognition in noise. Journal of Acoustical Society of America, 86, 1374-1383.

Egan, J. (1948). Articulation testing methods. The Laryngoscope, 58, 955-991.

Shojaei, E. (2016). Effect of signal to noise ratio on the speech perception ability of older adults. Medical Journal of the Islamic Republic of Iran, 30, 342.

Erber, N.P. (1969), Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech Hearing Research, 12, 423-425.

Festen, J.M. & Plomp, R. (1990). Effects of fluctuating noise and interfering speech on the perception threshold for impaired and normal hearing. Journal of Acoustical Society of America, 88, 1725-1736.

Gardner, H.J. (1971). Application of high frequency consonant discrimination word test in hearing aid evaluation. Journal of Speech and Hearing Disorders, 36, 344-355.

Gatehouse, S. (1994). Components and determinants of hearing aid benefit. Ear and Hearing, 15, 34-45.

Govil, S. (2002). Contralateral suppression of OAE and speech in noise: effects of age, gender and ear. Unpublished Master’s Dissertation. University of Mysore, Mysore.

Hutcherson, R.W., Dirks, D.D., & Morgan, D.E. (1995). Evaluation of the speech perception in noise (SPIN) test. Otolaryngology-Head and Neck Surgery, 87(2), 239-245.

Jayaram, M., Baguley, D.M., & Moffat, D.A. (1992). Speech in noise: a practical test procedure. Journal of Laryngology and Otology, 106, 105-110.

Kalikow, D.N., Stevens, K.N. & Elliot, L.L. (1977). Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. Journal of the Acoustical Society of America, 61, 1337-1351.

Kanasaku, K., Yamaura, A., & Kitazawa, S. (2000). Sex differences in lateralization revealed in the posterior language areas. Cerebral Cortex, 10(9), 862-872.

Keith, R. & Talis, H. (1970). The use of speech in noise in diagnostic audiometry. Journal of Auditory Research, 10, 201.

Keith, R. & Talis, H. (1972). The effects of white noise on PB scores on normal and hearing impaired listeners. Audiology, 11, 177.

Ludvigsen C. (1973). Auditive and audiovisual perception of PB words masked with white noise. Scandinavian Audiology, 2; 107-111.

Lutman, M.E., Brown, E.J., & Coles, R.R.A. (1986). Self-reported disability and handicap in the population in relation to pure tone threshold, age, sex and type of hearing loss. British Journal of Audiology, 21, 45-58.

Markides, A. (1986). Speech levels and speech-to-noise ratios. British Journal of Audiology, 20; 115-120.

Martin F. N. (1994). Hearing aid selection, in Introduction to Audiology, 256-279, Prentice Hall, Englewood Cliffs.

Miller, G. A., Heise, G. A., & Lichten, W. (1951). The intelligibility of speech as a function of the context of the test materials. Journal of Experimental psychology 41, 329-335.

Nelson, D.A., Schroder, A.C., & Wojtczak, M. (2001). Effect of forward masking on speech identification scores. Journal of the Acoustical Society of America, 110(4), 2045-2064.

Nilson, M., Soli, S.D., & Sullivan, J. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.

O'Neil, J.J. (1954). Contribution of the visual components of oral symbols to speech comprehension. Journal of Speech and Hearing Disorders, 19, 429-439.

Olsen, W., Noffsinger, D., & Kurdziel, S. (1975). Speech discrimination in quiet and in white noise by patients with peripheral and central lesions. Acta Otolaryngologica, 80, 375.

Owen, E., & Schubert, E.D. (1977). Development of the California Consonant Test. Journal of Speech and Hearing Research, 20, 463-474.

Palva, T. (1955). Studies of hearing for pure tones and speech in noise. Acta Otolaryngologica, 45(3), 231-243.

Pascoe, D.P. (1975). Frequency responses of hearing aids and their effects on the speech perception of hearing impaired subjects. Annals of Otology, Rhinology and Laryngology, Supplement, 23, 1-40.

Plomp. R. (1978). Auditory handicap of hearing impairment and the limited benefit of hearing aids. Journal of the Acoustical Society and Hearing Research, 29, 146-154.

Plomp,R. (1986). A signal to noise ratio model for the speech reception threshold of the hearing impaired. Journal of Speech and Hearing Research, 29, 146-154.

Ramachandra, P. (2001), High Frequency Speech Identification Test for Hindi and Urdu Sepakers. Unpublished Master’s Dissertation. University of Bangalore, Bangalore.

Ramos de Miguel (2015) – Effects of high frequency supression for speech recognition in noise in Spanish normals. Oto Neurotol, 36, 720-726.

Rupp, R.R. & Stockdell (1980). Advice for treating the hearing impaired. Geriatrics 38(10), 35-40.

Sanders, D.A. & Goodrich S.J. (1971). The relative contribution of visual and auditory components of speech to speech intelligibility as a function of three conditions of frequency discrimination. Journal of Speech Hearing Research, 14, 154-159.

Sumby, W.H. & Pollack, I, (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26; 212-215.

Tschopp, K. & Zust, H. (1993), Influence of context on speech understanding ability using German sentence test materials. Scandinavian Audiology, 22, 251-225.

Wilber, L.A. (1994). Calibration, puretone, speech and noise signals. In J. Katz (Eds.), Handbook of Clinical Audiology (5th ed.) (pp. 73-97). Baltimore: Williams and Wilkins.

Wilson, R.H. & Strouse, A. (1999). Word recognition in multi-talker babble. American Speech Language and Hearing Association Convention.

Young, I. & Herbert, F. (1980). Noise effects on speech discrimination score. Journal of Auditory Research, 10, 127.

APPENDIX – A

Words included in Speech-in-Noise tests for High Frequency Bengali Words

LIST – I        LIST – II
Shikhā          Kariyāchi
Sḣiit           Sāptāha
Karchila        Ýāchen
Shakāl          Hiṅgshuk
Kichu           Haengla
Sandhyā         Kāk
Ciṯkār          Hānshi
Spasta          Snigdhā
Hājār           Kokil
Steśane         Ciṉmoy
Sāikele         Tānpora
Chābi           Chāgol
Hāngor          Jhātā
Choti           Shāp
Kāgoj           Ýātra
Karpur          Chābuk
Shanibār        Chātā
Jāhāj           Jhogrā
Jhākuni         Chāmach
Cḣitkāni        Shābdhān

[/vc_toggle][/vc_column][/vc_row]

 


SPEECH PERCEPTION IN NOISE (SPIN) WITH BENGALI HIGH FREQUENCY WORDS: NORMATIVE DATA

Saugat Roy (Kolkata, India), Chief Audiologist at Audient Hearing Solutions and student of SAERA.

ABSTRACT

     Understanding speech in background noise is a challenging task even for a normal auditory system. In a quiet situation, an individual may not display any difficulty in perceiving words, owing to the presence of redundant cues. The scenario changes in day-to-day situations, where there is exposure to various composite noises. To address this need, the High Frequency Bengali Speech Identification Test (HF-BSIT) was developed in Bangla to meet the colloquial need of the population. This test will also be useful in selecting amplification devices for hearing impaired individuals with a gradually sloping loss. The test material was standardized on 40 native Bengali speaking adults (twenty males and twenty females) aged 18 to 30 years. The results also highlighted the importance of signal-to-noise ratio and gender.

INTRODUCTION

     Understanding speech in background noise is occasionally difficult even for normal listeners. It has been demonstrated that sensorineural hearing impairment is associated with a loss of recognition in noise compared to normal hearing (Palva, 1955; Plomp, 1978). There is a small proportion of people with normal hearing thresholds and normal speech recognition in quiet surroundings who have great difficulty managing in an everyday noisy environment (dysacusis). Even though speech-in-noise tests do not provide significant help in localizing a lesion in the auditory system (Jayaram, Baguley & Moffat, 1992), a fast and reliable test helps in predicting and assessing the benefit of amplification (Plomp, 1978; Dirks, Dubno & Morgan, 1984), in assessing job suitability, and in medico-legal work (Lutman, Brown & Coles, 1986).

     Understanding speech in adverse conditions is an extremely important and challenging task for the human auditory system (Beattie, Barr & Roup, 1997). During daily conversation, most people possess the ability to “tune out” interfering noises that emanate from various directions, focusing instead on signals of interest. When adverse conditions disrupt speech perception, miscommunication is usually a temporary, albeit annoying, inconvenience because most conversations offer ample opportunity to repeat words or phrases not initially understood (Beattie, 1989). The ability to understand speech in noise depends upon multiple factors such as the characteristics of the speech signal, the signal-to-noise ratio, and the listener’s degree of hearing impairment. A routine hearing evaluation usually does not provide ample information about a listener’s functional communication abilities.

     In everyday listening conditions, there is always some noise present. The listener, however, also observes the speaker, which improves the perception of speech in noise (O'Neil, 1954; Sumby & Pollack, 1954; Erber, 1969; Sanders & Goodrich, 1971; Ludvigsen, 1973). In such conditions, the speaker tries to compensate for the noise interference by raising the voice level in order to keep the subjective loudness of speech in noise equal to the loudness of speech in quiet (Markides, 1986).

     Daily communication requires the ability to understand speech in varying degrees of noise. Normal hearing individuals do not complain about understanding speech in quiet environments, but may have some difficulty understanding speech in noisy environments (Wilson & Strouse, 1999). It has been established that individuals with sensorineural hearing loss (SNHL) demonstrate greater difficulty understanding speech in background noise than do normal hearing individuals under the same conditions (Dubno, Dirks & Morgan, 1984). Each of the variables mentioned above interacts in determining how well one understands speech in any given environment (Nilson, Soli & Sullivan, 1994). Listeners with identical word recognition abilities in a quiet background can have significantly different word recognition abilities in a noisy background (Beattie, Barr & Roup, 1997; Wilson & Strouse, 1999).

     Speech is a redundant auditory signal composed of many bits of information (Martin, 1994). Similarly, our language structure has both extrinsic and intrinsic redundancies. The extrinsic redundancies involve information obtained from phonemes and syntax; the intrinsic redundancies are possessed by the listener, based on their experience with the language (Miller, Heise & Lichten, 1951). The more extrinsic and intrinsic redundancies available, the easier it becomes to understand the speech signal (Miller, Heise & Lichten, 1951). Due to this redundancy, normal hearing individuals can understand the signal even when it is highly degraded, as in a crowded restaurant (Wilson & Strouse, 1999).

     The redundancy of the speech signal varies depending on whether one is listening to words in isolation, listening to sentences or participating in a conversation (Festen & Plomp, 1990). Generally, it is much easier to understand longer speech signals than short ones, even when the speech is embedded in background noise. Sentences are the easiest signal to understand as they provide the listener with acoustic information, semantic and contextual cues, and linguistic content. These signals provide greater redundancy. It is much easier to understand a conversation about a known subject, than single syllable words. Monosyllabic words, embedded in background noise, are the most difficult speech signal to comprehend. However, due to the increased redundancy and contextual cues in sentence materials, it becomes more difficult to determine whether the listener has perceived the entire sentence or has responded to a few key words that convey the meaning of the sentence (Wilson & Strouse, 1999).

     Speech-in-noise testing has what some consider an advantage over many other central tests, in that no specially recorded tapes or procedures are required for administration. Only a conventional audiometer and standardized recorded monosyllabic word lists are needed; the clinician with a more cavalier approach may even choose to use monitored live voice. It is this ease of administration that has made speech-in-noise testing one of the most used, and perhaps misused, speech tests of Central Auditory Nervous System (CANS) function. Clinicians unaccustomed to and unprepared for central testing often resort to this test when faced with a patient with a possible CANS disorder. Unfortunately, the testing is frequently conducted without normative data and, in many cases, without knowledge of the actual SN ratio (identical audiometer dial settings for the speech and noise signals do not guarantee a 0 dB SN ratio measured in sound pressure level (SPL)). A reasonable method for reducing the variability of speech-in-noise recognition measurements is to record the monosyllables and the white noise at the desired SN ratio on the same tape track. Additionally, by presenting both the speech and noise from a single recording, the second channel of the audiometer remains available for presenting contralateral masking, if necessary.

     High frequency components above 8 kHz contribute to speech understanding in noise for subjects with normal hearing (Ramos de Miguel et al., 2015). Similarly, a study of the effect of signal-to-noise ratio on the speech perception ability of older adults showed reduced speech perception ability at low to mid levels when the signal was decreased and the noise was increased (Shojael et al., 2016).

     Though speech-in-noise tests have primarily been used to test for the presence of an auditory processing problem, they have also gained importance as a realistic means of determining the utility of hearing aids.

     Sparseness and redundancy give rise to an account of speech perception in noise based on glimpsing. Many studies have demonstrated that a single competing talker or amplitude-modulated noise is a far less effective masker than multi speaker babble or speech-shaped noise (Festen & Plomp, 1990).

Need for the study

  • Individuals with high frequency hearing loss need to be tested with words consisting primarily of high frequency speech sounds. To meet this need, specific high frequency word tests have been developed (Gardner High Frequency Word Lists, 1971; Pascoe High Frequency Test, 1975; California Consonant Test, 1977; Speech Identification Test for Hindi and Urdu Speakers, 2001; HF-KSIT, Mascarenhas, 2002). In a quiet situation, some individuals with high frequency hearing loss may not display any difficulty in perceiving some of the high frequency words, due to the presence of redundant cues. In order to decrease the external redundancy, noise can be introduced (Miller, Heise & Lichten, 1951).
  • Speech-in-noise tests have been developed with words covering all phonemes of the language (Egan, 1948). However, a speech-in-noise test that includes only high frequency Bengali words has not been developed.
  • Such a test would be highly useful in selecting amplification devices for those hearing impaired individuals with gradually sloping hearing loss who do not show any difficulty in perceiving high frequency words in a typical test situation where noise is not used. Prior to utilizing the test on individuals with hearing impairment, it is essential to obtain normative data. This information would enable the audiologist to know how deviant a hearing impaired individual is when compared to normal hearing individuals. Hence, it is essential not only to develop a high frequency speech-in-noise test, but also to obtain normative data.

Goals of the study

  1. To develop a speech perception in noise test making use of high frequency words and restaurant noise.
  2. To obtain normative data for the developed test on Bengali speaking adults.
  3. To compare the norms across different signal-to-noise ratios.
  4. To study the effect of gender on the developed test.

METHOD

     The aim of the present study was to develop normative data for speech perception in noise with high frequency Bengali words as stimuli for adult Bengali speakers. The study was done in two stages:

      Stage I: The development of the test material

     Stage II:  Administration of the test on normal hearing individuals

Stage I:  Development of the test material

     The material used for the study was obtained from the High Frequency Bengali Speech Identification Test (HF-BSIT), developed by a group of linguists and speech pathologists. Each word subtest contained 25 words with an equal distribution of high frequency consonants. The material was prepared using the Cool Edit software: the recorded version of the HF-BSIT was copied onto one track, while restaurant noise was recorded on a second track. It was ensured, by normalizing the signals, that the noise and speech signal were of equal loudness. Prior to each word list, a 1000 Hz calibration tone was recorded and used to adjust the VU meter of the audiometer to zero.
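The two-track mixing described above can be sketched numerically. The following is a minimal illustration (not the authors' Cool Edit procedure; the sample values are made-up placeholders) of scaling a noise track against a speech track so that the mix sits at a chosen SNR, using RMS levels:

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech level exceeds the noise level
    by `snr_db` decibels, then mix the two tracks sample by sample."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return [s + gain * n for s, n in zip(speech, noise)]

# Toy signals standing in for a word recording and restaurant noise.
speech = [0.5, -0.4, 0.3, -0.6, 0.2, -0.1]
noise = [0.05, -0.07, 0.06, -0.04, 0.05, -0.06]

mixed = mix_at_snr(speech, noise, snr_db=10)
```

In the study itself the attenuator dial of the audiometer set the final levels; the sketch only shows the level arithmetic behind a 0, 10 or 20 dB SNR condition.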

Stage II: Administration of the Test

Subjects

     Forty Bengali speaking adults (twenty female and twenty male) aged 18 to 30 years were tested. The average age of the normal hearing subjects was 22.4 years. Listeners satisfied the following criteria: (a) bilateral pure tone air and bone conduction thresholds of less than or equal to 15 dB hearing level (HL; ANSI, 1996) for the octave frequencies 250 to 8000 Hz; (b) normal bilateral immittance results; (c) air-bone gap of less than 10 dB HL; (d) no documented history of otitis media; (e) no apparent articulatory abnormality; and (f) literacy.

Equipment and Speech Material

     The subjects were tested using a Madsen Electronics Orbiter OB 922 clinical audiometer with TDH-39 headphones in MX41/AR cushions and a B 71 bone vibrator. The audiometer was calibrated according to ANSI 1996 standards. Immittance testing was done using a GSI Tympstar. The speech material consisted of words from the High Frequency Bengali Speech Identification Test, played using the Cool Edit software. The signals from the two tracks were routed from a Pentium IV computer to the tape and auxiliary inputs of the clinical audiometer. It was ensured that the signals from the two tracks were sent to two different channels but to the same ear; the intensity of the two tracks was manipulated using the attenuator dial of the audiometer. The two word lists, each consisting of 25 words, were presented to each participant through the TDH-39 earphones.

Environment

     The testing was done in a sound treated double room, with the ambient noise levels within permissible limits as recommended by ANSI, 1991 (S3.1-1991; cited in Wilber, 1994).

Procedure 

1. For subject selection

     Initially, all subjects were tested for pure tone thresholds. The testing was done at the frequencies 250 Hz to 8000 Hz for air conduction and 250 Hz to 4000 Hz for bone conduction. All subjects were also tested for normal middle ear function using tympanometry and the acoustic reflex test.

2. For obtaining speech-in-noise scores

     The individuals who passed the subject selection criteria were recruited for obtaining speech identification scores in the presence of noise. The subjects were initially instructed that they would hear speech and noise in one ear. They were asked to attend to the speech signals and write down what they heard. Subjects were also informed that they could guess the test items in case these were not very clear. The subjects were tested 40 dB above their pure tone average (the average of the thresholds at the speech frequencies 500 Hz, 1000 Hz and 2000 Hz) (ASHA, 1997, cited in Rupp & Stockdell, 1980). Half of the subjects were tested in the right ear and the other half in the left ear. The noise levels were varied so as to present the signals at 0, +10 and +20 dB SNR. All subjects first heard the test material at 0 dB SNR, followed by +10 and +20 dB SNR. The subjects heard the same list in the 0 dB and +20 dB SNR conditions, and a different list in the +10 dB SNR condition. Half of the subjects were tested with List I in the first and last noise conditions, while the other half heard List II. Thus, it was ensured that all subjects were tested in the three SNR conditions.
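The presentation-level rule above (40 dB above the pure tone average of the 500, 1000 and 2000 Hz thresholds) amounts to a one-line computation. A small sketch with a hypothetical audiogram:

```python
def pure_tone_average(thresholds_db_hl):
    """PTA: mean of the audiometric thresholds at the speech
    frequencies 500, 1000 and 2000 Hz (dB HL)."""
    speech_freqs = (500, 1000, 2000)
    return sum(thresholds_db_hl[f] for f in speech_freqs) / 3.0

def presentation_level(thresholds_db_hl, sensation_level=40):
    """Speech material is presented 40 dB above the PTA."""
    return pure_tone_average(thresholds_db_hl) + sensation_level

# Hypothetical normal-hearing audiogram (dB HL), not a study subject.
audiogram = {250: 10, 500: 5, 1000: 10, 2000: 15, 4000: 10, 8000: 15}
level = presentation_level(audiogram)  # PTA = 10 dB HL, so 50 dB HL
```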

Scoring

     The responses obtained from the subjects were scored as right or wrong. Each correct word was given a score of one and a wrong word was given a score of zero. The responses obtained from the subjects were statistically analysed.
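The right/wrong scoring scheme can be sketched as follows (the word strings are hypothetical placeholders, not HF-BSIT items):

```python
def score_responses(presented, responses):
    """One point per word written down correctly; zero otherwise."""
    return sum(1 for target, resp in zip(presented, responses)
               if target == resp)

presented = ["word_a", "word_b", "word_c", "word_d"]  # placeholder items
responses = ["word_a", "word_b", "word_x", "word_d"]  # one error
raw_score = score_responses(presented, responses)     # → 3
```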

RESULTS AND DISCUSSION

     The data obtained from the normal population were analyzed using SPSS version 10.0. Analysis of variance (ANOVA) was done for:

  • Effect of SNR
  • Effect of list
  • Effect of gender on speech identification scores at different SNRs

1) Effect of SNR

     An initial analysis using analysis of variance (ANOVA) showed a significant effect of SNR on speech identification scores (SIS) for both lists (F(2,57) = 102.38, p < 0.05 for List I; F(2,57) = 191.435, p < 0.05 for List II). The effect of SNR across lists was analyzed using Tukey's post hoc test, which revealed a significant difference between each pair of SNRs (0, 10 and 20 dB) for both Lists I and II (Table 1). Figure 1 depicts the mean speech identification scores and standard deviations at the different SNRs, across lists and gender. It shows that 20 dB SNR gave the best SIS, whereas 0 dB SNR gave the worst, with the SIS at 10 dB SNR falling in between. This was seen for both Lists I and II.

Table 1: Effect of SNRs on SIS

SNR comparison (dB)    'p' value, List I    'p' value, List II
0 vs. 10               0.000*               0.000*
10 vs. 20              0.007*               0.000*
0 vs. 20               0.000*               0.000*

*Significant at the 0.01 level.
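For illustration, the F-statistic behind a one-way ANOVA such as the one reported above can be computed by hand. The sketch below uses made-up identification scores, not the study data:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, df = N - k
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical identification scores (max 20) at 0, 10 and 20 dB SNR.
scores_0 = [12, 13, 14, 13]
scores_10 = [17, 18, 17, 18]
scores_20 = [19, 19, 20, 19]
f_value = one_way_anova_f([scores_0, scores_10, scores_20])
# A large F indicates a strong SNR effect, as found in the study.
```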

     Initial analysis revealed that five of the words in each list were extremely difficult for the normal hearing subjects in the 0 dB SNR condition: 72% to 75% of the subjects found these words difficult in both lists. Hence, it was decided to drop any word that more than 50% of the subjects failed to identify. The words retained in List I and List II for the final analysis are given in Appendix A.
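The word-dropping rule can be expressed as a simple filter; the error rates below are hypothetical placeholders, not the values from Tables 4 and 5:

```python
def words_to_keep(error_rates, cutoff=0.5):
    """Keep words whose error rate does not exceed `cutoff`, i.e. drop
    words that more than half the subjects failed to identify."""
    return [word for word, rate in error_rates.items() if rate <= cutoff]

# Hypothetical 0 dB SNR error rates (fraction of subjects missing the word).
list_errors = {"word_a": 0.30, "word_b": 0.50, "word_c": 0.55,
               "word_d": 0.50, "word_e": 0.30, "word_f": 0.72}
kept = words_to_keep(list_errors)  # drops word_c and word_f
```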

     The results of the present study concur with earlier documentation of reduced discrimination performance in noise in normal hearing listeners. Earlier studies (Young & Herbert, 1980; Keith & Talis, 1970, 1972; Olsen, Noffsinger & Kurdziel, 1975) also reported an identification decrement in the presence of noise in normal hearing adults. Speech identification scores decreased with decreasing SNR because of the greater masking effect that takes place (Nelson, Schroder & Wojtczak, 2001). On account of the masking, the external redundancy present in the speech signal decreases, making it more difficult for the subject to perceive the signal.

Figure 1: The mean speech identification scores. Error bars show +/- 1 SD

2)  Effect of list

     To check whether the two lists used in the present study were equivalent, a one-way ANOVA was carried out. It showed no statistically significant difference between the lists at the different SNRs. Table 2 shows the summary of this analysis; no significant difference was noted at either the 0.01 or the 0.05 level.

Table 2: Mean, SD and F-values for Lists I and II

SNR (dB)    List I Mean+    List I SD    List II Mean+    List II SD    'F' value
0           13.45           1.79         13.20            1.24          0.263
10          17.75           1.12         17.25            1.02          0.196
20          19.05           0.76         19.32            0.81          1.45

+ Maximum score = 20.

     The analysis carried out prior to the deletion of the five words from each list showed that the lists were not equal. The lists were found to be unequal in the lower SNR conditions (0 dB and 10 dB SNR); at the higher SNR condition (20 dB SNR), however, the lists were equal. In the presence of lower SNRs, the intelligibility of certain words probably dropped, making it difficult for the subjects to perceive them. Once these words were removed, the inequality between the two lists disappeared. Hence, it is recommended that, when using the HF-BSIT in the presence of noise, only the words included in the two lists given in Appendix A be used, and not the entire original lists.

3) Effect of gender

     One-way ANOVA was performed to examine the effect of gender on speech identification scores at the different SNRs for both lists. The means, SDs and 'F' values are shown in Table 3. For both Lists I and II, at 0 dB SNR there was a significant difference between the SIS of males and females, with females scoring higher than males. However, no effect of gender was observed at 10 and 20 dB SNR for either list.

Table 3: Mean, SD and F-values for the effect of gender

List    SNR (dB)    Male Mean+    Male SD    Female Mean+    Female SD    'F' value
I       0           12.6          1.96       14.3            1.16         5.594*
I       10          17.7          1.25       17.8            1.03         0.038
I       20          18.8          0.79       17.3            0.67         2.32
II      0           12.6          0.96       13.8            1.22         5.89*
II      10          18.1          1.19       17.7            0.82         0.758
II      20          19.1          0.99       19.6            0.51         1.991

* Significant at the 0.05 level; + Maximum score = 20.

     Such a gender difference in the presence of noise has been reported by Gatehouse (1994), according to whom males needed more intensity to "just follow" speech in quiet as well as in noise backgrounds compared to females. Similar findings have been reported by Govil (2002), who found that, in the presence of noise, females in three different age groups (6-8 years, 8-10 years, and 18-30 years) obtained significantly higher scores than males. That study utilized an SNR of 10 dB.

     A possible reason why females obtain higher scores in the presence of noise could be that females are able to use both hemispheres for processing, whereas males are not. This inference is based on an investigation by Kanasaku, Yamaura and Kitazawa (2000), who reported that females use the posterior temporal lobe more bilaterally than males during linguistic processing of global structures.

     Hence, it is recommended that, when using the speech-in-noise test, a client's responses be compared with the norms for that client's sex, especially at lower SNRs.

     Tables 4 and 5 indicate that, with a decrease in SNR, the number of subjects who could not perceive specific words increased. At 0 dB SNR, depending on the word, 5% to 50% of the subjects did not perceive the stimulus, and no word was perceived by all the subjects in this noise condition. List I had words that were not perceived by a larger number of subjects, while in List II this variability was less.

     The number of subjects who did not perceive words correctly was lower in the 10 dB and 20 dB SNR conditions. Based on these findings, it is recommended that the 0 dB SNR condition not be used when testing hearing impaired individuals in the presence of noise, as this condition is difficult even for normal hearing individuals.

Table 4: Percentage of subjects showing errors for specific words in List I

Word         0 dB SNR    10 dB SNR    20 dB SNR
Shikhā       30%         15%          5%
Sḣiit        45%         25%          5%
Karchila     35%         –            –
Shakāl       20%         5%           –
Kichu        20%         –            –
Sandhyā      25%         5%           –
Ciṯkār       50%         30%          10%
Spasta       25%         10%          5%
Hājār        15%         –            –
Steśane      50%         20%          10%
Sāikele      40%         15%          –
Chābi        30%         10%          –
Hāngor       20%         5%           5%
Choti        40%         10%          5%
Kāgoj        30%         –            –
Karpur       35%         10%          –
Shanibār     45%         15%          –
Jāhāj        45%         15%          –
Jhākuni      50%         25%          10%
Cḣitkāni     55%         30%          15%

– : no errors noted.

Table 5: Percentage of subjects showing errors for specific words in List II

Word         0 dB SNR    10 dB SNR    20 dB SNR
Kariyāchi    50%         10%          5%
Sāptāha      35%         5%           –
Ýāchen       50%         20%          10%
Hiṅgshuk     40%         15%          5%
Haengla      45%         10%          5%
Kāk          25%         –            –
Hānshi       35%         –            –
Snigdhā      50%         20%          15%
Kokil        25%         –            –
Ciṉmoy       40%         10%          10%
Tānpora      30%         5%           –
Chāgol       35%         5%           –
Jhātā        40%         10%          5%
Shāp         40%         –            –
Ýātra        45%         20%          5%
Chābuk       30%         10%          –
Chātā        35%         5%           –
Jhogrā       40%         –            –
Chāmach      40%         –            –
Shābdhān     45%         10%          10%

– : no errors noted.

     From the above data analysis, it may be concluded that:

  1. The material developed (the HF-BSIT with a background competition of restaurant noise) may be used to check the perception of individuals in difficult listening conditions.
  2. With the decrease in SNR, the speech identification scores decreased. This was seen for both List I and List II.
  3. Lists I and II were found to be equal after deletion of the five words from each list that were difficult for the majority of the subjects to perceive.
  4. Males and females performed equally well when an SNR of 20 dB was used. However, when the SNR was reduced to 0 dB, females out-performed the males.
  5. The two word subtests of the HF-BSIT can be used to evaluate speech-in-noise performance, provided the lists are modified as given in Appendix A.
  6. It is recommended that while testing hearing impaired individuals, the 0 dB SNR condition should not be used, as normal hearing individuals also found this condition too difficult.

SUMMARY AND CONCLUSIONS

     Daily communication requires the ability to understand speech in varying degrees of noise. Normal hearing individuals do not complain about understanding speech in quiet environments, but may have some difficulty with understanding speech in noisy environments (Wilson & Strouse, 1999). It has been established that individuals with sensorineural hearing loss (SNHL) demonstrate greater difficulty understanding speech in background noise than do normal hearing individuals under the same conditions (Dubno, Dirks & Morgan, 1984).

     The present study was undertaken to develop a speech perception in noise test making use of high frequency words and restaurant noise, and to obtain norms on a sample of Bengali speaking adults. The effect of SNR and gender was studied. The material was constructed using the word subtest of the HF-BSIT, developed by a group of linguists and speech pathologists (2016), and restaurant noise. One track in the software program Cool Edit Pro carried the word list, while another track carried the restaurant noise. The test material was administered to 40 normal hearing adults, who were tested at three different SNRs: 0, 10 and 20 dB.

     Analysis of the data was done using ANOVA. The analysis revealed the following:

  1. The material developed (the HF-BSIT with a background competition of restaurant noise) may be used to check the perception of individuals in difficult listening conditions.
  2. With the decrease in SNR, the speech identification scores decreased. This was seen for both List I and List II.
  3. Lists I and II were found to be equal after deletion of the five words from each list that were difficult for the majority of the subjects to perceive.
  4. Males and females performed equally well when an SNR of 20 dB was used. However, when the SNR was reduced to 0 dB, females out-performed the males.
  5. The two word subtests of the HF-BSIT can be used to evaluate speech-in-noise performance, provided the lists are modified as given in Appendix A.
  6. It is recommended that while testing hearing impaired individuals, the 0 dB SNR condition should not be used, as normal hearing individuals also found this condition too difficult.

IMPLICATIONS:

  1. The present speech-in-noise test would be useful in evaluating individuals with gradually sloping hearing loss who complain of auditory perception problems but do not demonstrate a problem on routine speech tests.
  2. It would be useful in selecting amplification devices for individuals with gradually sloping hearing loss.

BIBLIOGRAPHY

ANSI: American National Standards Institute (1996). Specifications for audiometers (ANSI S3.6-1996). New York: American National Standards Institute.

Beattie, R.C. (1989). Word recognition functions for the CID W-22 test in multitalker noise for normally hearing and hearing impaired subjects. Journal of Speech and Hearing Disorders, 54, 20-32.

Dirks, D.D., Dubno, J.R., & Morgan, D.E. (1984). Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America, 86, 1374-1383.

Egan, J. (1948). Articulation testing methods. The Laryngoscope, 58, 955-991.

Shojael, E. (2016). Effect of signal to noise ratio on the speech perception ability of older adults. Medical Journal of the Islamic Republic of Iran, 30, 342.

Erber, N.P. (1969). Interaction of audition and vision in the recognition of oral speech stimuli. Journal of Speech and Hearing Research, 12, 423-425.

Festen, J.M. & Plomp, R. (1990). Effects of fluctuating noise and interfering speech on the speech reception threshold for impaired and normal hearing. Journal of the Acoustical Society of America, 88, 1725-1736.

Gardner, H.J. (1971). Application of a high frequency consonant discrimination word test in hearing aid evaluation. Journal of Speech and Hearing Disorders, 36, 344-355.

Gatehouse, S. (1994). Components and determinants of hearing aid benefit. Ear and Hearing, 15, 34-45.

Govil, S. (2002). Contralateral suppression of OAE and speech in noise: effects of age, gender and ear. Unpublished Master's Dissertation. University of Mysore, Mysore.

Hutcherson, R.W., Dirks, D.D., & Morgan, D.E. (1995). Evaluation of the speech perception in noise (SPIN) test. Otolaryngology Head and Neck Surgery, 87(2), 239-245.

Jayaram, M., Baguley, D.M., & Moffat, D.A. (1992). Speech in noise: a practical test procedure. Journal of Laryngology and Otology, 106, 105-110.

Kalikow, D.N., Stevens, K.N., & Elliot, L.L. (1977). Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. Journal of the Acoustical Society of America, 61, 1337-1351.

Kanasaku, K., Yamura, A., & Kitazawa, S. (2000). Sex difference in lateralization revealed in the posterior language area. Cerebral Cortex, 10(9), 862-872.

Keith, R. & Talis, H. (1970). The use of speech in noise in diagnostic audiometry. Journal of Auditory Research, 10, 201.

Keith, R. & Talis, H. (1972). The effects of white noise on PB scores on normal and hearing impaired listeners. Audiology, 11, 177.

Ludvigsen C. (1973). Auditive and audiovisual perception of PB words masked with white noise. Scandinavian Audiology, 2; 107-111.

Lutman, .ME; Brown, E.J., & Coles, R.R.A. (1986). Self reported disability and handicap in the population in relation to pure tone threshold, age, sex and type of hearing loss. British Journal of Audiology, 21, 45-58.

Markides, A. (1986). Speech levels and speech-to-noise ratios. British Journal of Audiology, 20; 115-120.

Martin F. N. (1994). Hearing aid selection, in Introduction to Audiology, 256-279, Prentice Hall, Englewood Cliffs.

Miller, G. A., Heise, G. A., & Lichten, W. (1951). The intelligibility of speech as a function of the context of the test materials. Journal of Experimental psychology 41, 329-335.

Nelson, D.A., Schroder, A.C., & Wojtcjak, M. (2001). Effect of forward masking on speech identification scores. Journal of Acoustical society of America, 110(4), 2045-64.

Nilsson, M., Soli, S.D., & Sullivan, J. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.

O’Neill, J.J. (1954). Contributions of the visual components of oral symbols to speech comprehension. Journal of Speech and Hearing Disorders, 19, 429-439.

Olsen, W., Noffsinger, D., & Kurdziel, S. (1975). Speech discrimination in quiet and in white noise by patients with peripheral and central lesions. Acta Otolaryngologica, 80, 375.

Owens, E. & Schubert, E.D. (1977). Development of the California Consonant Test. Journal of Speech and Hearing Research, 20, 463-474.

Palva, T. (1955). Studies of hearing for pure tones and speech in noise. Acta Otolaryngologica, 45(3), 231-243.

Pascoe, D.P. (1975). Frequency responses of hearing aids and their effects on the speech perception of hearing impaired subjects. Annals of Otology, Rhinology and Laryngology, Supplement, 23, 1-40.

Plomp, R. (1978). Auditory handicap of hearing impairment and the limited benefit of hearing aids. Journal of the Acoustical Society of America, 63, 533-549.

Plomp, R. (1986). A signal-to-noise ratio model for the speech reception threshold of the hearing impaired. Journal of Speech and Hearing Research, 29, 146-154.

Ramachandra, P. (2001). High Frequency Speech Identification Test for Hindi and Urdu Speakers. Unpublished Master’s Dissertation. University of Bangalore, Bangalore.

Ramos de Miguel (2015). Effects of high-frequency suppression for speech recognition in noise in Spanish normal-hearing listeners. Otology & Neurotology, 36, 720-726.

Rupp, R.R. & Stockdell (1980). Advice for treating the hearing impaired. Geriatrics, 38(10), 35-40.

Sanders, D.A. & Goodrich, S.J. (1971). The relative contribution of visual and auditory components of speech to speech intelligibility as a function of three conditions of frequency discrimination. Journal of Speech and Hearing Research, 14, 154-159.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212-215.

Tschopp, K. & Züst, H. (1993). Influence of context on speech understanding ability using German sentence test materials. Scandinavian Audiology, 22, 251-255.

Wilber, L.A. (1994). Calibration, puretone, speech and noise signals. In J. Katz (Eds.), Handbook of Clinical Audiology (5th ed.) (pp. 73-97). Baltimore: Williams and Wilkins.

Wilson, R.H. & Strouse, A. (1999). Word recognition in multi-talker babble. Paper presented at the American Speech-Language-Hearing Association Convention.

Young, I. & Herbert, F. (1980). Noise effects on speech discrimination score. Journal of Auditory Research, 10, 127.

APPENDIX – A

Words included in the Speech-in-Noise test for High Frequency Bengali Words

LIST – I                                              LIST – II

Shikhā Kariyāchi
Sḣiit Sāptāha
Karchila Ýāchen
Shakāl Hiṅgshuk
Kichu Haengla
Sandhyā Kāk
Ciṯkār Hānshi
Spasta Snigdhā
Hājār Kokil
Steśane Ciṉmoy
Sāikele Tānpora
Chābi Chāgol
Hāngor Jhātā
Choti Shāp
Kāgoj Ýātra
Karpur Chābuk
Shanibār Chātā
Jāhāj Jhogrā
Jhākuni Chāmach
Cḣitkāni Shābdhān