Spoken Word Recognition: A Stage-processing Approach to Language Differences
In recognising spoken words, the retrieval of a unique mental representation from the speech input involves the exceedingly difficult task of locating word boundaries in a quasi-continuous stimulus and of finding the single representation that corresponds to highly variable acoustic forms. Many cognitive psycholinguists have proposed that these segmentation and categorisation problems are easier to solve at the sublexical level than at the word level: some sublexical representation would mediate the mapping between the acoustic signal and the mental lexicon. Accordingly, much effort has gone into identifying a hypothesised universal perceptual building block, for example the syllable. More recent advances in speech processing research indicate, however, that speakers of different languages process speech by relying on units or segmentation strategies that are appropriate to the phonological properties of their native language. Recent data on this topic are reviewed, with special emphasis on a stage-processing analysis of the experimental situations and phenomena reported. For example, the syllabic effects observed in fragment detection and the phenomenon of blending dichotically presented words are discussed. It is argued that although there is a strong case for language specificity in listeners' intuitions about the phonological structure of their language, as well as in word recognition, less evidence is available regarding the early perceptual stages.