
Learning location-invariant orthographic representations for printed words

Neural networks were trained with backpropagation to map location-specific letter identities (letters coded as a function of their position in a horizontal array) onto location-invariant lexical representations. Networks were trained on a corpus of 1179 real words, and on artificial lexica in which the importance of letter order was systematically manipulated. Networks were tested with two benchmark phenomena (transposed-letter priming and relative-position priming) thought to reflect flexible orthographic processing in skilled readers. Networks were shown to exhibit the desired priming effects, and the sizes of the effects were shown to depend on the relative importance of letter order information for performing location-invariant mapping. Presenting words at different locations was found to be critical for building flexible orthographic representations in these networks, since this flexibility was absent when stimulus location did not vary.
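The setup described above can be sketched in a minimal form: words are presented at varying positions in a horizontal slot array, each letter is coded as a one-hot unit for its (slot, letter) pair, and a small network is trained with plain backpropagation to activate a single location-invariant word unit. The toy lexicon, slot count, and layer sizes below are illustrative assumptions, not the paper's parameters (the paper used 1179 real words).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy lexicon; anagram pairs ("cat"/"act") make letter order matter
LEXICON = ["cat", "act", "dog", "god", "tap", "pat"]
SLOTS, ALPHA = 7, 26  # 7-slot horizontal array, 26 letters

def encode(word, pos):
    """Location-specific letter identities: one-hot per (slot, letter)."""
    x = np.zeros(SLOTS * ALPHA)
    for i, ch in enumerate(word):
        x[(pos + i) * ALPHA + (ord(ch) - ord("a"))] = 1.0
    return x

# Training set: every word presented at every legal location
X, Y = [], []
for w_idx, w in enumerate(LEXICON):
    for pos in range(SLOTS - len(w) + 1):
        X.append(encode(w, pos))
        y = np.zeros(len(LEXICON))
        y[w_idx] = 1.0  # location-invariant lexical target
        Y.append(y)
X, Y = np.array(X), np.array(Y)

# One hidden layer trained with backpropagation (cross-entropy loss)
H, lr = 30, 0.5
W1 = rng.normal(0, 0.1, (X.shape[1], H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, len(LEXICON))); b2 = np.zeros(len(LEXICON))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):
    h = sigmoid(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Backpropagate the softmax cross-entropy gradient
    d_logits = (p - Y) / len(X)
    dW2 = h.T @ d_logits; db2 = d_logits.sum(0)
    d_h = (d_logits @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def predict(word, pos):
    """Index of the most active word unit for a word shown at a location."""
    h = sigmoid(encode(word, pos) @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))

# Location invariance: each word should map to its own unit at every position
acc = np.mean([predict(w, p) == i
               for i, w in enumerate(LEXICON)
               for p in range(SLOTS - len(w) + 1)])
print(acc)
```

Presenting each word at several locations during training is what forces the hidden layer to pool slot-specific letter units into position-independent word evidence; if every word were shown at one fixed position, the mapping would succeed without any such pooling, which is the contrast the abstract draws.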

Keywords: artificial neural networks; orthographic processing; reading; supervised learning

Document Type: Research Article

Affiliations: Laboratoire de Psychologie Cognitive, CNRS & Aix-Marseille University, Marseille, France

Publication date: 01 March 2010
