Automatic Speech Recognition Experiments with a Model of Normal and Impaired Peripheral Hearing

Automatic speech recognition experiments were carried out using a model of normal and impaired peripheral hearing as a front-end preprocessor to a neural-network recognition stage trained and tested over the TIMIT speech database. The simulation of a flat mild/moderate sensorineural hearing loss led to a significant decrease in recognition performance compared to a simulation of normal hearing. Analyses of the confusion matrices using multidimensional scaling techniques showed that the decrements in scores were not associated with significant changes in the pattern of phoneme confusions. Consonant recognition was dominated by the features manner and place of articulation, but the features sonority, frication, voicing, and sibilance could also be detected. Vowel recognition was dominated by the first two formant frequencies. The results are in broad agreement with the speech perception data for normal and hearing-impaired listeners for the type of audiometric configuration simulated. The main discrepancy between the system and human data is the significantly lower recognition performance found for vowels, particularly when simulating normal hearing.
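The multidimensional scaling (MDS) analysis of the confusion matrices can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: it applies classical (Torgerson) MDS to a hypothetical consonant confusion matrix, with invented labels and counts, to recover a low-dimensional perceptual space in which frequently confused phonemes lie close together.

```python
import numpy as np

# Hypothetical confusion counts for four stop consonants (rows: stimulus,
# columns: response). Values are illustrative, not taken from the paper.
labels = ["p", "b", "t", "d"]
conf = np.array([
    [50,  5, 12,  2],
    [ 6, 48,  3, 10],
    [14,  4, 47,  4],
    [ 3, 11,  5, 46],
], dtype=float)

# Symmetrize the confusions and scale to similarities in [0, 1].
sim = (conf + conf.T) / 2.0
sim = sim / sim.max()

# Dissimilarity: large when two phonemes are rarely confused.
d = 1.0 - sim
np.fill_diagonal(d, 0.0)

# Classical MDS: double-center the squared distances, then use the
# top eigenvectors of the resulting matrix as 2-D coordinates.
n = d.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (d ** 2) @ J
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]  # largest eigenvalues first
coords = evecs[:, order[:2]] * np.sqrt(np.maximum(evals[order[:2]], 0.0))

for lab, (x, y) in zip(labels, coords):
    print(f"{lab}: ({x:+.3f}, {y:+.3f})")
```

Clusters along the recovered dimensions can then be inspected for interpretable features; in the paper's analysis, manner and place of articulation dominated the consonant space, and the first two formant frequencies dominated the vowel space.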

Document Type: Research Article

Publication date: 01 November 1997
