A length effect (or lack thereof) has traditionally been used to distinguish between serial and parallel processing. For lexical decision under central presentation, most studies have shown that reaction times are independent of length for words of 4 to 6 letters, leading to the conclusion that letters are processed in parallel. However, examination of a large corpus of data across a wide range of lengths (New et al., 2006) revealed a more complicated, and more interesting, picture. When the effects of frequency and neighborhood size are partialled out, length has a facilitative effect for words of 3 to 5 letters (i.e., shorter RTs for longer words), no effect for words of 5 to 8 letters, and an inhibitory effect for words of 8 to 13 letters. These experimental results are shown in blue in the figure below, regraphed from Fig. 2 of New et al. (2006).
It is straightforward to explain this pattern under the SERIOL model, under the assumption that RT is the sum of two components: (1) the total time that it takes for all of the letters to fire and (2) the time it takes for the lexical network to settle following firing of the last letter. The total firing time is given by Len * firing-time/letter, where the firing-time/letter is assumed to be on the order of 15-20 ms/letter, corresponding to a firing rate of around 60 Hz. It is assumed that the settling function has the shape shown in red above; settling time decreases with increasing word length, and then asymptotes. That is, more letters provide more information, so the lexical network can settle more quickly for longer words. However, there is a limit to how quickly the lexical network can settle, so beyond a certain length the settling function is flat.
The points in green show modeled RT, which equals the settling function + length * 20 ms. It is evident that this modeled RT gives an excellent fit to the data: modeled RT is decreasing across lengths 3 to 5, flat across lengths 5 to 8, and increasing across lengths 8 to 13.
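For concreteness, here is a minimal sketch of this two-component account in Python. The 20 ms/letter figure is the one used above; the particular settling values (the 480 ms start, the 330 ms floor, and the piecewise slopes) are hypothetical numbers chosen only so that the settling function has the assumed shape: decreasing, then flat.

```python
# A minimal sketch of the SERIOL two-component account of the length effect.
# All settling-function values below are hypothetical illustrations.

MS_PER_LETTER = 20  # serial firing time per letter, as in the text

def settling_time(length):
    # Hypothetical settling function: more letters -> faster settling,
    # down to a floor beyond which the lexical network cannot speed up.
    if length <= 5:
        return 480 - 45 * (length - 3)
    if length <= 8:
        return 390 - 20 * (length - 5)
    return 330

def modeled_rt(length):
    # RT = total letter-firing time + lexical settling time
    return length * MS_PER_LETTER + settling_time(length)

rts = {n: modeled_rt(n) for n in range(3, 14)}
```

With these made-up values, modeled RT reproduces the qualitative pattern: decreasing for lengths 3 to 5, flat for 5 to 8, increasing for 8 to 13.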
But is there any evidence for the above assumptions? I have previously pointed out that ERP studies by Hauk and Pulvermuller are consistent with these claims. They have shown that increased length initially (~100-200 ms post-stimulus) leads to increased amplitudes near occipital cortex. Interestingly, this effect is lateralized to the RH, indicating that it is not merely a result of increased visual angle, as such an effect would be symmetric. Rather, it is consistent with a serial encoding driven by a retinotopic activation gradient that is strongest over the initial letters (i.e., in the RH). Later on (300+ ms), increased length leads to decreased amplitude from left posterior cortex. This reduced signal is consistent with the claim of faster lexical settling for longer words. Thus the timing, direction, and location of these effects are consistent with the proposal that longer words cause increased processing time at the letter level, followed by decreased settling time at the lexical level.
A recent fMRI study (Yarkoni et al., in press) has yielded even stronger evidence for the proposed settling function. They had subjects perform text reading under RSVP, and used regression analyses to determine effects of different variables in different brain regions. For the VWFA, they found that the effect of length had a quadratic component. The fitted function decreased across lengths 2 to 7, was fairly flat across lengths 7 to 10, and increased across lengths 10 to 13 (as shown in Fig. 5 of Yarkoni et al.). Thus, the observed effect of length in the VWFA across lengths 2 to 10 is very similar to the proposed settling function above. For very long words (> 10 letters), there is a mismatch between the two functions (i.e., increasing for the quadratic fit vs. flat for the proposed settling function), but the experimental estimate may be unreliable due to the relatively small number of very long words, coupled with the requirement of a quadratic fit.
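The caveat about the quadratic fit can be made concrete: a quadratic fitted by least squares must turn upward past its vertex, so if the true function is flat at long lengths, the fit will show a spurious increase there. The sketch below fits a quadratic via the normal equations to settling-like data (all values hypothetical) that decrease and then go flat.

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = a + b*x + c*x^2 via the normal equations,
    # solved with Cramer's rule (pure stdlib; fine for three unknowns).
    n = len(xs)
    Sx = sum(xs)
    Sx2 = sum(x ** 2 for x in xs)
    Sx3 = sum(x ** 3 for x in xs)
    Sx4 = sum(x ** 4 for x in xs)
    Sy = sum(ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    Sx2y = sum(x * x * y for x, y in zip(xs, ys))
    M = [[n, Sx, Sx2], [Sx, Sx2, Sx3], [Sx2, Sx3, Sx4]]
    v = [Sy, Sxy, Sx2y]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def replaced(k):
        # matrix M with column k replaced by the right-hand side v
        return [[v[i] if j == k else M[i][j] for j in range(3)]
                for i in range(3)]

    D = det3(M)
    a, b, c = (det3(replaced(k)) / D for k in range(3))
    return a, b, c

# Hypothetical settling-like data: decreasing up to length 7, flat after.
xs = list(range(2, 14))
ys = [max(390, 600 - 30 * x) for x in xs]
a, b, c = fit_quadratic(xs, ys)
fitted = lambda x: a + b * x + c * x * x
```

The positive quadratic coefficient forces the fitted curve upward at the longest lengths even though the underlying data are flat there, which is exactly the kind of artifact suggested above.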
I'd like to make one other point about the Yarkoni et al. article. In another analysis, they seek to discover whether the encoding in the VWFA is purely orthographic or whether it includes phonological information. They find an effect of phonological neighborhood-size that cannot be reduced to an effect of orthographic neighborhood-size, and conclude that the encoding in the VWFA is partially phonological. However, I would suggest that more precision is required in the statement of the issues and the interpretation of the results.
Due to interactivity between brain regions, an area that initially encodes only orthographic information could later be affected by feedback from a phonological area. Thus the VWFA encoding could well be purely orthographic during initial feedforward processing, and then VWFA activity could be affected later by phonological attributes. Thus the real question is whether the encoding in the VWFA is initially purely orthographic, not whether VWFA activity is ever influenced by phonological variables. The real question cannot be answered by fMRI, due to lack of temporal precision.
Thursday, July 31, 2008
Wednesday, July 23, 2008
Recent articles by Wimmer & colleagues, and Pitchford & colleagues
In Hawelka and Wimmer (2008, Vision Research), young adult dyslexics and controls performed letter search on 5-letter strings. The target letter appeared prior to the string, and remained visible when the string appeared. The dependent variable was reaction time for detecting a present target. The authors found that the dyslexics were actually faster than the controls (with the same high accuracy) and concluded that "the slow reading speed of German dyslexic readers cannot be traced to inefficient visual processing of letter strings".
However, I would suggest that this conclusion is unwarranted. The task of detecting a letter within a string differs from the visual processing required for reading, where automatic encoding of all of the letters' positions within the string is necessary. Just because dyslexics are as fast as controls at detecting a letter target does not mean that they encode letter order in a normal manner. In fact, when processing requires fast automatic encoding of letter position across the entire string, dyslexics are notably impaired, as found by Hawelka et al. (2005; 2006, Vision Research), Enns, Bryson & Roes (1995, Can Jour of Exp Psych.) and various studies by Valdois and colleagues.
It is of interest to look at the RT patterns of the dyslexics vs. controls in this letter search paradigm. Pitchford, Ledgeway and Masterson (in press, QJEP) did so for adult English dyslexics. They found an LVF advantage for controls, but not dyslexics. A similar pattern is also evident in Hawelka and Wimmer's (2008) data - numerically, controls were faster on position 2 than 4, but dyslexics were not. These patterns are consistent with my idea that normal readers perform rapid serial processing of letters as sub-parts of a single object (the string), whereas dyslexics process letters in parallel as individual objects.
The length of the string in these experiments (5 letters) is near the limit (~4) for the number of visual objects that can be processed in parallel. Thus dyslexics do not show increased RTs in the letter search task because they can process the five letters of the string mostly in parallel, but they do show a different RT pattern due to this parallel processing. For longer strings, the difference between the two styles of processing has stronger implications, because the rapid serial processing (at 10-15 ms/letter) allows ~10 letters to be processed per fixation, whereas parallel processing in highly-compensated adult dyslexics is probably restricted to 5 letters max, due to innate limitations on the visual system's ability to process multiple objects in parallel. This accounts for the slow reading that is characteristic of dyslexic readers in transparent orthographies. (In English, dyslexics would have the same visual limit. Due to the irregularity, they may adopt the approach of processing only the salient letters, and guessing at the word. This yields faster, less accurate reading.)
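The arithmetic behind this can be sketched as follows. The ~15 ms/letter rate and the ~5-object cap come from the discussion above; the 150 ms encoding window per fixation is an assumed figure, used only for illustration.

```python
# Back-of-the-envelope comparison of serial vs. parallel letter encoding.
# window_ms is an assumed per-fixation encoding window, not a measured value.

def letters_per_fixation_serial(window_ms=150, ms_per_letter=15):
    # Serial encoding: throughput limited by the firing time per letter.
    return window_ms // ms_per_letter

def letters_per_fixation_parallel(object_limit=5):
    # Parallel encoding of letters as individual objects: capped by the
    # number of visual objects processable at once, regardless of time.
    return object_limit

def fixations_needed(word_len, per_fixation):
    # Ceiling division: fixations required to cover the whole word.
    return -(-word_len // per_fixation)
```

On these assumptions, a 10-letter word fits into a single fixation under serial encoding, but needs two fixations under object-by-object parallel encoding.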
Bergmann and Wimmer (in press, Cog. Neuropsychology) then examined performance of German dyslexics versus controls on lexical decision (LD) versus pseudohomophone decision (PD). (In PD, the answer is "yes" if the pronunciation of a pseudoword is a word, e.g., yes for "taksi", no for "tazi"). Looking at accuracy, they found that dyslexics were only slightly impaired (with respect to controls) for PD, but were highly impaired on LD. In fact, controls were better at LD than PD, while dyslexics were better at PD than LD. For RT, dyslexics were considerably slower than controls on both tasks.
This provides yet more evidence that, universally, the characteristic pattern of dyslexia is a limitation in the uptake of orthographic information, rather than a phonological deficit. However, predicated on their presumption that there is no deficit in the dyslexics' visual processing of strings, the authors place the dyslexics' deficits in three places: poor representations of orthographic word forms, slow connections between orthographic word forms and phonological word forms, and slow connections between graphemes and phonemes.
But the data are explained more compactly via the proposal of abnormal, parallel encoding of letters as individual objects. This limits the number of letters that can be processed within a fixation. Furthermore, parallel processing probably also slows down grapheme-phoneme mapping within a fixation, as such translation likely functions more automatically under seriality. Both of these factors yield increased RTs. The parallel processing also prohibits the encoding of a string as a single object, which precludes normal representation of orthographic word forms, yielding poor LD performance.
Monday, July 21, 2008
Share (2008) in Psychological Bulletin
David Share has written a wonderful article entitled On the Anglocentricities of Current Reading Research and Practice: The Perils of Overreliance on an "Outlier" Orthography. The title says it all.
One issue that Share addresses is how we should conceive of multiple processing routes in reading. The standard division is on the lexical/sub-lexical dimension. He suggests that the important distinction is the learned/novel dimension (fluent/non-fluent). However, I would suggest that there really are two different processing routes (ventral, dorsal) and the proper distinction is the nature of visual/orthographic analysis. The high-level ventral orthographic representation (open-bigrams) is parasitic upon parts-based object recognition, and does not encode phonological information. The high-level dorsal orthographic representation is parasitic upon speechreading and encodes graphosyllabic information (i.e., letters grouped into onsets, vowels and codas). Both routes encode lexical information (i.e. there are connections from open-bigrams to lexical items, and connections from graphosyllables to lexical items), and both routes are activated by all letter strings.
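As a toy illustration of the dorsal-route representation, here is a sketch of a graphosyllabic grouping in Python. It is deliberately crude - it just groups consecutive consonant and vowel runs and labels them C or V - and is meant only to show the kind of structure intended, not an actual parsing algorithm.

```python
import itertools

VOWELS = set("aeiou")

def graphosyllabic_groups(word):
    # Crude illustration: group consecutive consonant/vowel runs,
    # approximating letters grouped into onsets, vowels and codas.
    return [("V" if is_v else "C", "".join(run))
            for is_v, run in itertools.groupby(word, key=lambda ch: ch in VOWELS)]
```

For "string", this yields the onset "str", the vowel "i", and the coda "ng".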
Share stresses the importance of understanding how fluency arises. I would suggest that in order to do this, we must understand the nature of orthographic processing in exquisite detail; if orthographic processing qualitatively differs from normal, I think that fluency is not possible.
Thursday, July 17, 2008
Greek, RT patterns, and the final-letter advantage.
Nikki Pitchford, her student Maria Ktori, Tim Ledgeway and Jackie Masterson have been looking at reaction-time patterns for letter search. In this task, a target letter is presented, and then a random string of 5 letters is displayed, and the subject specifies whether the target letter is in the string. The string remains displayed until the response. Looking at reaction times for positive trials as a function of the position of the target, English readers show initial- and final-letter advantages; that is, a target in the first position is detected more quickly than one in the second position and a target in the fifth position is detected more quickly than one in the fourth position. However Greek readers presented with Greek stimuli show an initial-letter advantage, but not a final-letter advantage. This is true for children and adults.
Now the final-letter advantage is one of my favorite topics, and I've noted that the final-letter advantage is not present in English for exposure durations < 100 ms. My explanation has been that the final letter is not reached for exposures of < 100 ms (due to seriality); at longer exposures, the final letter is activated and can fire for an extended period because it is not inhibited by a subsequent letter, creating a final-letter advantage. So this advantage is taken to occur at the letter level.
However, this explanation is inconsistent with the Greek data. If the final-letter advantage occurs at the letter level, it should be present for Greek, but it is not. This caused me to think that perhaps the initial- and final-letter advantages actually arise at the open-bigram level. Since 2004, the SERIOL model has included edge bigrams, which encode the first and last letters. Recall that open-bigrams are taken to be specific to the ventral/visual route. Greek is a transparent language and there is evidence that transparent languages weight the dorsal/phonological route relatively more heavily than English. Thus ventral-route orthographic representations (i.e., open-bigrams) may play less of a role in Greek string processing than English string processing. If the final-letter advantage actually reflects activation of the final edge-bigram, this would explain why it is absent for Greek.
Note, however, that the original argument on temporal dependency still holds for English. For very brief exposures, the final letter is activated weakly or not at all, and so the final edge-bigram is weakly activated, so there is no final-letter advantage. At longer exposures, the final letter and the final edge-bigram are activated, and so there is a final-letter advantage.
The proposal that these letter effects occur at the bigram level also explains another aspect of their data. For English, they found that positional letter frequency influenced RTs at the first and final positions (i.e., faster RTs for letters more likely to occur at a given position), but there was no effect of positional frequency at the internal positions. Now, under the SERIOL model, the only position-specific representations are edge bigrams: letter units and non-edge open-bigrams are not position-specific. Thus the only positions at which position-specific letter effects could possibly occur are at the edges. If one assumes that frequency affects excitability of bigram units, and bigram excitability affects RTs, this then explains the effect of positional letter frequency at the exterior letters.
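To make the representational claim concrete, here is a small sketch of an open-bigram encoding with edge bigrams. The "*" edge marker and the maximum gap of two intervening letters are illustrative assumptions, not necessarily the exact SERIOL parameters.

```python
def open_bigrams(word, max_gap=2):
    # Non-edge open bigrams: ordered letter pairs with at most max_gap
    # intervening letters; they carry relative order, not absolute position.
    pairs = set()
    for i in range(len(word)):
        for j in range(i + 1, min(i + 2 + max_gap, len(word))):
            pairs.add(word[i] + word[j])
    return pairs

def edge_bigrams(word):
    # Edge bigrams explicitly mark the first and last letters; in this
    # scheme they are the only position-specific units.
    return {"*" + word[0], word[-1] + "*"}
```

For "cart", this gives the non-positional open bigrams {ca, cr, ct, ar, at, rt} plus the position-specific edge bigrams {*c, t*}; positional letter frequency could then act only through the excitability of *c and t*.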
SSSR meeting
Recently got back from the Society for the Scientific Study of Reading conference, where I chaired a symposium, with Nikki Pitchford and Daisy Powell, on orthographic learning. We were heartened that there seems to be an increasing openness to the importance of visual/orthographic processing in reading, and felt that the symposium was well received.
Nikki gave a talk on RT patterns for letter search in 5-letter arrays in English, English dyslexic, and Greek readers. Her Greek data have caused me to reconsider my explanation of the final-letter effect somewhat, which I'll address in a subsequent post.
Daisy discussed experiments with poor readers without phonological deficits but with Rapid Automatized Naming deficits. These subjects had poorer orthographic knowledge than controls in general, but actually out-performed controls on orthographic learning in Share's self-teaching task. (In this task, pseudowords are included in passages read by subjects, and the subjects are later tested on the spelling of the pseudowords.) These were four-letter pseudowords. It would be interesting to try the experiment with longer pseudowords, as I think that four letters can be processed in parallel by dyslexics, whereas processing should break down particularly on longer words. So the poor readers may depend on visual information more than the controls, and be capable of remembering this visual information better than controls for strings up to four letters.
Sylviane presented longitudinal data showing that her Visual Attention Span measure (the number of letters that can be reported following brief presentation of five letters) is predictive of reading achievement. Sylviane and I are both interested in gaining a better understanding of whether this task measures a general deficit in the ability to distribute visual attention across multiple objects, or is more specific to learned orthographic processing. As I mentioned in a previous post, it may well measure both, and a deficit may arise at different levels in different subjects.
Piers wowed everyone with MEG data showing early (~100 ms post-target) phonological priming in IFG and precentral gyrus.
I harped on my favorite subject - perceptual patterns for identification of briefly presented strings - and suggested that the trigram identification task could be used to measure whether normal visual/orthographic processing has been learned. In particular, the SERIOL model predicts that, at a given eccentricity, increased letter position within a string should have a much larger detrimental effect on accuracy in the LVF than the RVF for normal readers. Thus they should show a VF asymmetry in the effect of string position. If normal string processing has not been learned, the pattern should be symmetric, with little effect of string position in either VF. Data from 1 seventh-grade dyslexic and 7 age-matched controls, from Dubois et al. (2007), support this proposal, as discussed in an earlier post. Clearly "more research is required".
Wednesday, June 25, 2008
What's wrong with American scientists?
A friend just got back from a computational linguistics conference in the U.S. She commented that she and her colleague had noticed that the best talks came from European labs, and that Europeans seemed more open to exchange of ideas than American researchers. This observation precisely matches my own experience and opinions.
In general, it seems to me that American researchers have little interest in any ideas but their own. All of their energy is spent churning out papers to further their careers. I think that this is due to the extreme competitiveness of the funding situation here. It results in thrashing - scientists spend all of their time maneuvering to get funding, based on safe incremental changes to their previously funded work. Anything not directly related to funding for their own research is of no use; it is as if they have blinders on. As a result, cronyism rules; established researchers will only help other researchers if there's something in it for them. If a new researcher independently generates original ideas, it is impossible to get ahead based solely on the quality of those ideas.
That's why the U.S. is losing its pre-eminence in science and Europe is gaining, as shown by an NSF study of where the leading papers in a variety of fields are being generated.
Work from Eran Zaidel's lab
Zaidel's lab has done a series of interesting experiments on hemifield lexical decision. The following findings seem particularly intriguing. (1) In English, LVF performance is worse than RVF performance for acceptance of words, but performance for rejection of pseudowords is equivalent across VFs. In Hebrew, this interaction is not present. (2) Reading ability (vocabulary and comprehension) is correlated with LVF/RH measures in English, but not Hebrew. To explain these phenomena, the authors suggest that lexical processing is more left-lateralized in Hebrew. I'd like to suggest an alternative account of these findings.
Consistent with imaging evidence for left lateralization of the VWFA, I assume that LVF/RH stimuli are projected to the LH pre-lexically, and that VF differences therefore arise at a pre-lexical, orthographic level. Recall that the SERIOL model posits a monotonically-decreasing activation gradient. In left-to-right languages, formation of the activation gradient requires learned visual processing in the RH in order to invert the acuity gradient into the activation gradient. In right-to-left languages, this learned processing would occur in the LH. The process of inverting the acuity gradient is especially costly in left-to-right languages because it is coupled with callosal transfer to the LH.
For words presented to the LVF/RH in English, incomplete inversion of the acuity gradient would lead to a non-optimal encoding of letter order, especially for longer words. This would tend to make words look unfamiliar, thus there would be an impact for accuracy on words, but not pseudowords, creating the observed interaction. More frequent readers would gain reading expertise in vocabulary, comprehension, and learned string-specific processing in RH visual areas. Thus the correlation between reading ability and LVF measures. In Hebrew, acuity-gradient inversion is not required in the LVF/RH, so LVF measures are not correlated with reading ability. Similarly, the word/pseudoword interaction is not present. (You don't see the opposite VF pattern because acuity-gradient inversion in the LH is more efficient than in the RH because it is not combined with callosal transfer.)
So I suggest that their findings indicate that specialized visual processing of strings is more right-lateralized in English and more left-lateralized in Hebrew. This idea of RH specialization for early visual string processing in English is consistent with the results of an EEG study, which showed that a length effect was right-lateralized initially (at 90 ms) and then became left-lateralized (at 200 ms) (Hauk, Davis, Ford, Pulvermuller & Marslen-Wilson, 2006).