Wednesday, June 25, 2008

What's wrong with American scientists?

A friend just got back from a computational linguistics conference in the U.S. She commented that she and her colleague had noticed that the best talks came from European labs, and that Europeans seemed more open to exchange of ideas than American researchers. This observation precisely matches my own experience and opinions.

In general, it seems to me that American researchers have little interest in any ideas but their own. All of their energy is spent churning out papers to further their careers. I think that this is due to the extreme competitiveness of the funding situation here. It results in thrashing - scientists spend all of their time maneuvering to get funding, based on safe incremental changes to their previously funded work. Anything not directly related to funding for their own research is of no use; it is as if they have blinders on. As a result, cronyism rules; established researchers will only help other researchers if there's something in it for them. A new researcher who independently generates original ideas finds it impossible to get ahead based solely on the quality of those ideas.

That's why the U.S. is losing its pre-eminence in science and Europe is gaining, as shown by an NSF study of where the leading papers in a variety of fields are being generated.

Work from Eran Zaidel's lab

Zaidel's lab has done a series of interesting experiments on hemifield lexical decision. The following findings seem particularly intriguing. (1) In English, LVF performance is worse than RVF performance for acceptance of words, but performance for rejection of pseudowords is equivalent across VFs. In Hebrew, this interaction is not present. (2) Reading ability (vocabulary and comprehension) is correlated with LVF/RH measures in English, but not Hebrew. To explain these phenomena, the authors suggest that lexical processing is more left-lateralized in Hebrew. I'd like to suggest an alternative account of these findings.

Consistent with imaging evidence for left lateralization of the VWFA, I assume that LVF/RH stimuli are projected to the LH pre-lexically, and that VF differences therefore arise at a pre-lexical, orthographic level. Recall that the SERIOL model posits a monotonically-decreasing activation gradient. In left-to-right languages, formation of the activation gradient requires learned visual processing in the RH in order to invert the acuity gradient into the activation gradient. In right-to-left languages, this learned processing would occur in the LH. The process of inverting the acuity gradient is especially costly in left-to-right languages because it is coupled with callosal transfer to the LH.
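
To make the gradient-inversion step concrete, here is a toy sketch in Python. The numbers are purely illustrative (they are not SERIOL's actual parameters); the point is just that, for LVF/RH presentation in a left-to-right language, the acuity-driven activation pattern runs in the opposite direction from the required monotonically-decreasing gradient, so it must be inverted.

    # Toy illustration of acuity-gradient inversion (made-up numbers, not SERIOL's parameters).

    def acuity_activation(retinal_positions):
        """Initial feature-level activation: highest at fixation (0), falling with eccentricity."""
        return [1.0 - 0.1 * abs(pos) for pos in retinal_positions]

    def locational_gradient(n_letters, step=0.1):
        """Target encoding: activation decreases monotonically with string position."""
        return [1.0 - step * i for i in range(n_letters)]

    # A 4-letter word fixated on its final letter, so letters 1-3 fall in the LVF.
    lvf_positions = [-3, -2, -1, 0]
    print(acuity_activation(lvf_positions))   # roughly [0.7, 0.8, 0.9, 1.0]: increases left to right
    print(locational_gradient(4))             # roughly [1.0, 0.9, 0.8, 0.7]: must decrease left to right
    # In the LVF/RH the two patterns run in opposite directions, so the acuity pattern has to be
    # remapped (learned RH processing plus callosal transfer in left-to-right scripts). In the RVF,
    # the acuity pattern already decreases left to right, so little remapping is needed.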

For words presented to the LVF/RH in English, incomplete inversion of the acuity gradient would lead to a non-optimal encoding of letter order, especially for longer words. This would tend to make words look unfamiliar; thus accuracy would suffer for words but not for pseudowords, creating the observed interaction. More frequent readers would gain reading expertise in vocabulary, in comprehension, and in learned string-specific processing in RH visual areas, hence the correlation between reading ability and LVF measures. In Hebrew, acuity-gradient inversion is not required in the LVF/RH, so LVF measures are not correlated with reading ability. Similarly, the word/pseudoword interaction is not present. (You don't see the opposite VF pattern in Hebrew because acuity-gradient inversion in the LH is more efficient than in the RH, as it is not combined with callosal transfer.)

So I suggest that their findings indicate that specialized visual processing of strings is more right-lateralized in English and more left-lateralized in Hebrew. This idea of RH specialization for early visual string processing in English is consistent with the results of an EEG study, which showed that a length effect was right-lateralized initially (at 90 ms) and then became left-lateralized (at 200 ms) (Hauk, Davis, Ford, Pulvermuller & Marslen-Wilson, 2006).


Thursday, June 5, 2008

Dubois et al. (2007) in Cognitive Psychology

The authors look at perceptual patterns for a seventh-grade French dyslexic, MT, who has no phonological deficit. In particular, they look at trigram identification across a range of retinal locations (centered from -7 to 7), for MT versus seven age-matched controls. The authors fit curves to the trigram data, and did not find any difference between MT and the controls.

However, the SERIOL model makes quite specific predictions of how perceptual patterns should differ between dyslexics and controls, which the authors did not evaluate. The model predicts that a letter's position within the string should have a much stronger influence in the LVF than the RVF. This is due to the proposal of learned left-to-right inhibition in the LVF/RH. For younger readers, this effect should be strongest near fixation, where perceptual learning is the strongest. For example, accuracy for a letter at retinal location -2 should be much better when it is the 1st letter in the string than when it is the 3rd letter. In contrast, accuracy for a letter at retinal location 2 should be minimally affected by its position within the string. This asymmetry should be a signature of normal visual/orthographic processing, and it should be absent for dyslexics, under the assumption that they are not performing normal visual processing.
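
As a toy illustration of the predicted signature (made-up numbers, not fitted values), suppose that in the LVF/RH each letter to the left of a given letter adds a learned left-to-right inhibitory cost, while in the RVF the corresponding cost is negligible:

    # Toy illustration of the predicted LVF/RVF asymmetry (illustrative numbers only).
    # Assumption: in the LVF/RH, learned left-to-right inhibition penalizes letters that have
    # more letters to their left, so string position matters; in the RVF it barely does.

    def predicted_accuracy(retinal_loc, string_pos, lvf_position_cost=0.15, rvf_position_cost=0.02):
        base = 0.95 - 0.05 * abs(retinal_loc)          # acuity: worse farther from fixation
        cost = lvf_position_cost if retinal_loc < 0 else rvf_position_cost
        return base - cost * (string_pos - 1)          # penalty grows with the number of letters to the left

    # Letter at retinal location -2 (LVF): large difference between 1st and 3rd string position.
    print(predicted_accuracy(-2, 1), predicted_accuracy(-2, 3))   # ~0.85 vs ~0.55
    # Letter at retinal location +2 (RVF): minimal difference.
    print(predicted_accuracy(2, 1), predicted_accuracy(2, 3))     # ~0.85 vs ~0.81
    # On this account, dyslexics lacking the learned RH processing should show the
    # symmetric (RVF-like) pattern in both fields.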


Indeed, inspection of the data in Figure 6, shown above, supports this prediction. In this figure, a filled circle represents the 1st letter in the LVF and the 3rd letter in the RVF. Conversely, an open circle represents the 3rd letter in the LVF and the 1st letter in the RVF. For controls, at eccentricities of 1 to 3 letter widths, it is evident that string position had a strong effect in the LVF but not the RVF, while the pattern was symmetric for MT. Examination of the individual data shows that the asymmetric pattern held at the individual level.

Of course, this is a very small sample size. I would suggest that it is important to try this experiment on a large group of school-age controls and dyslexics to see how diagnostic this asymmetric vs. symmetric pattern truly is. If it is highly diagnostic, this would be quite informative as to the nature of core deficits in developmental dyslexia.

Tuesday, June 3, 2008

Overlap model - Gomez, Ratcliff & Perea (in press) in Psychological Review

In this paper, the authors introduce the Overlap Model of letter-position encoding, which is essentially a sloppy slot-based model. For example, an L in the third position would activate an encoding for L in the third position, as well as L in the second and fourth positions, but to a lesser degree. The authors present a series of experiments to determine parameter settings for the model, which govern the amount of spread for each letter position. In the experiments, a five-letter string was presented for 60 ms, followed by a mask and 2AFC. The choices were strings, and the way in which the distractor differed from the target was systematically varied. Note also that the choices were presented well below where the string occurred, so they did not act as an additional backward mask.
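
Here is a minimal sketch of the sloppy slot-based idea. This is not the authors' actual implementation (they fit overlapping distributions to the 2AFC choice data); the spread parameter and the match function below are placeholders meant only to convey the coarse-coding intuition.

    import math

    # Minimal sketch of a "sloppy" slot-based position code (placeholder parameters,
    # not the values Gomez et al. fit). A letter presented in one slot activates nearby
    # slots with a weight that falls off as a Gaussian of the slot distance.

    def slot_weight(presented_pos, coded_pos, sd=0.75):
        return math.exp(-((presented_pos - coded_pos) ** 2) / (2 * sd ** 2))

    def match(target, probe, sd=0.75):
        """Crude similarity: for each target slot, how strongly does the probe supply
        the same letter at or near that slot?"""
        total = 0.0
        for i, letter in enumerate(target):
            total += max((slot_weight(j, i, sd) for j, p in enumerate(probe) if p == letter),
                         default=0.0)
        return total / len(target)

    print(match("SLATE", "SLATE"))   # identical string: 1.0
    print(match("SLATE", "SALTE"))   # transposition: still high (~0.76), letters only one slot off
    print(match("SLATE", "SNOTE"))   # substitutions: lower (~0.6)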

From my point of view, the most interesting aspect of this paper is the finding that the final letter was the least well recognized and localized. In contrast, at longer exposures (>= 100 ms), the usual final-letter advantage can be observed. This contrast is consistent with serial processing and the resulting account of the final-letter advantage, and is difficult to explain otherwise.

However, as a model of letter-position encoding, the Overlap Model faces some difficulties.
  1. It is not a full model, as it does not explain how the positions are computed. How is the retinotopic representation transformed into a string-centered positional representation?
  2. It cannot explain the finding that, for nine-letter words, the prime 6789 provides facilitation (Grainger et al., 2006 in JEP:HPP). Even with a sloppy position encoding, there would be too much difference between the letters' positions in the prime and target to provide any overlap (see the sketch below). Nor could the model be modified to include a position encoding anchored at the final letter; their experiments do not support the existence of such an encoding, as the final letter was the least well anchored/localized (in contrast to the initial letter, which was the best anchored/localized).
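
To put a number on point 2, reusing the placeholder spread from the toy coarse-coding sketch above: each prime letter sits roughly five slots away from its slot in the nine-letter target, so its contribution to the overlap is essentially zero under any reasonable spread.

    import math

    # A "6789" prime (the final four letters of a nine-letter word, shown as a four-letter
    # string) occupies absolute slots 1-4, but would need to match target slots 6-9,
    # i.e., five slots away.
    sd = 0.75                                    # same placeholder spread as in the sketch above
    weight = math.exp(-(5 ** 2) / (2 * sd ** 2))
    print(weight)                                # ~2e-10: essentially no overlap, yet facilitation is observed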

Martin et al. (2007) in Brain Research

In this study, the subjects performed the Reicher-Wheeler task on five-letter words versus unpronounceable nonwords, for exposure durations of 50 and 66 ms. The stimuli were presented so that the target letter always occurred at fixation. So, for example, when the target was the second letter, the second letter appeared at fixation, putting the first letter in the LVF and the third to fifth letters in the RVF. Thus the retinal location and visual acuity of the target letter did not vary with its position within the string. The task was performed by adult unimpaired readers and dyslexics.

This provides an opportunity to look at how accuracy interacts with string position and exposure duration. First, we consider unimpaired readers. Under the assumption of serial processing, some letters may be read out before the mask occurs, and others will be read out after the mask occurs. The latter letters should be at a disadvantage. In general, the SERIOL model predicts that an increase in exposure duration should have the strongest effect at string positions in the transition zone (i.e., letters that were formerly read out after the mask occurred, but are now read out before it). In this experiment, the change in exposure duration was 16 ms, which is on the time scale proposed for per-letter processing. So at first glance, this suggests that early string positions should not be affected by an increase in exposure (because they are read out before the mask in any case) and later string positions should also not be affected (because they are read out after the mask in any case), while a transitional position should be affected. Here are the results from the experiment:

The nonword results for control subjects show an asymmetric effect of increased exposure, with the largest improvement at position 1 and no improvement at positions 4 and 5. This pattern is difficult to explain under parallel processing, but it also does not exactly match the SERIOL intuition that the improvement should be localized at the position that was not read out at 50 ms but was read out at 66 ms.
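
To spell out that first-glance intuition, here is a toy readout schedule with a uniform per-letter time slot. The 16 ms slot matches the exposure-duration difference; everything else is illustrative.

    # Toy uniform-slot readout: the letter at string position p is read out at p * SLOT ms
    # after stimulus onset (illustrative; assumes a fixed per-letter slot of ~16 ms).
    SLOT = 16

    def read_before_mask(exposure_ms, n_letters=5):
        return [p for p in range(1, n_letters + 1) if p * SLOT <= exposure_ms]

    print(read_before_mask(50))   # positions 1-3 beat the mask
    print(read_before_mask(66))   # positions 1-4 beat the mask
    # Only position 4 changes status, so on this naive account the benefit of the longer
    # exposure should be confined to one transitional position, rather than the front-loaded
    # improvement actually observed (largest at position 1, none at positions 4 and 5).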

However, let's consider the mechanics in more detail. Due to strong bottom-up activation in the LVF/RH, an increase in string position at fixation will not necessarily cause that letter to be read out (at the letter level) a full "time slot" (~15 ms) later, because the additional LVF/RH letters in earlier positions can "fill in" earlier time periods. That is, at the feature level, an initial letter at -1 reaches a higher activation than an initial letter at 0 (fixation). Hence, for a letter at fixation, activations (at the feature level) are similar for position 1 versus position 2. Therefore, the timing of activation at the letter level does not vary much either. This is illustrated in the following figure, which shows the proposed time that a letter starts firing at the letter level, based on its retinal location and string position. It shows how each increase in string position from 1 to 3 at fixation could delay firing by ~5 ms, rather than ~15 ms. There is a much larger difference going from position 3 to 4 under the assumption that an initial letter at -3 is too far from fixation to reach maximal activation at the feature level; the reduced activation level then percolates through the string, due to left-to-right inhibition within the RH and cross-hemispheric lateral inhibition.


Under this account, for the 50 ms exposure, the letter at fixation does not fire before the mask appears. For a 66 ms exposure, the letter at fixation can start to fire before the mask when it is in position 1, 2, or 3. This explains the observed interaction of increased exposure with string position. (However, this doesn't explain why having the third letter at fixation yields the poorest results overall. This may be due to greater positional uncertainty about the middle letter.)
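
Here is the same comparison under the refined account. The firing-onset values are invented, chosen only to reproduce the qualitative pattern described above (~5 ms steps over positions 1-3, then a large jump once three letters fall in the LVF).

    # Toy firing-onset schedule for the letter at fixation, by its string position
    # (values are made up, chosen only to reproduce the qualitative account).
    onset_ms = {1: 55, 2: 60, 3: 65, 4: 80, 5: 85}

    for exposure in (50, 66):
        fired = [p for p in range(1, 6) if onset_ms[p] < exposure]
        print(exposure, "ms exposure -> target fires before mask at positions", fired)
    # 50 ms -> []        : the fixated target letter never beats the mask
    # 66 ms -> [1, 2, 3] : it beats the mask only when it is the 1st, 2nd, or 3rd letter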

It is also interesting that there was no initial-letter advantage, or only a very weak one, in the nonword data. This is consistent with the idea that the initial-letter advantage is essentially an LVF non-initial-letter disadvantage. That is, when a second letter falls in the LVF, it receives much more additional lateral inhibition (at the feature level) than when it is the first letter in the LVF. In contrast, when a second letter falls at fixation, it only receives slightly more lateral inhibition than when it is the first letter. Thus the advantage for being the first letter is much reduced at fixation, compared to retinal locations in the LVF. Tydgat and Grainger (in press, JEP:HPP), on the other hand, claim that the initial-letter advantage is due to reduced receptive-field sizes for letters, such that a letter receives considerably less inhibition with 1 immediate flanker than with 2 flankers. This account incorrectly predicts that an initial-letter advantage should be present at fixation.

Note that the best overall firing patterns are obtained when fixation falls on the second or third letter. That is, these conditions allow the earliest completion of letter readout. This explains the OVP effect observed in the word conditions. Thus the word and nonword conditions yield different patterns, with fixation on the third letter yielding the poorest results for nonwords, but the best results for words. This is because accuracy in the word condition is influenced by the processing of the entire string (to yield lexical activation), which is best at positions 2 and 3, while accuracy in the nonword condition is influenced primarily by the processing and localization of the target letter.

It is also interesting to see that the dyslexics showed a different pattern. First, there was no position X exposure-duration interaction. This is consistent with my proposal that dyslexics process letters in parallel, while unimpaired readers process them serially. Secondly, there was no word-superiority effect, except when fixation fell on the third letter. This may indicate that these dyslexics use a retinotopic method to encode letter position, which is keyed to having two letters in the LVF. When the presentation condition matches this requirement, lexical representations are well activated; otherwise they are not. This is consistent with the case study of a single French dyslexic (Dubois et al., 2007, Cognitive Neuropsychology), which showed that lexical recognition was best when fixation fell on the third letter, independently of string length.

Monday, June 2, 2008

Shalev, Mevorach & Humphreys (in press) in Neuropsychologia

The authors investigate the deficits of two patients with parietal lesions. They find that the patients have a selective deficit in encoding the order of letters in a string, with spared ability to identify letter identities. For example, the patients are much more likely to make false positive responses (in lexical decision) to nonwords formed by transposing letters of words than to nonwords formed by replacing letters. In contrast, a patient with a left occipitotemporal lesion did not show this pattern.

The authors conclude that letter identity and position are encoded separately. Surprisingly, in the ensuing discussion of models of orthographic encoding, they do not reference any of the recent developments in this area; the most recent reference is to a 2001 paper on the dual-route model.

I would explain their results as follows. In my article on alexia, I propose that the serial letter representation is transformed into two different high-level orthographic representations - an open-bigram encoding on the ventral (occipitotemporal) route, and a graphosyllabic encoding on the dorsal (occipito-parieto-frontal) route. The latter would provide a more robust encoding of letter order, as open-bigrams introduce ambiguity. If the dorsal graphosyllabic encoding is abolished, the result should be a less robust encoding of letter order in lexical processing, leading to a decreased sensitivity to transpositions. This is exactly what is observed in these parietal patients.
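
A toy comparison illustrates why a lexical code based only on open bigrams is more tolerant of transpositions than a code that preserves letter order more strictly. Both encodings below are simplified stand-ins (the adjacent-bigram code is just a proxy for a stricter order-preserving representation, not the graphosyllabic encoding itself).

    # Toy comparison of order tolerance (simplified stand-ins for the two routes).

    def open_bigrams(word, max_gap=2):
        """Ordered letter pairs separated by up to max_gap intervening letters."""
        return {word[i] + word[j]
                for i in range(len(word))
                for j in range(i + 1, min(i + 2 + max_gap, len(word)))}

    def adjacent_bigrams(word):
        """A stricter, order-preserving stand-in for the dorsal encoding."""
        return {word[i] + word[i + 1] for i in range(len(word) - 1)}

    def overlap(code_a, code_b):
        return len(code_a & code_b) / len(code_a | code_b)

    word, transposed_nonword = "JUDGE", "JUGDE"
    print(overlap(open_bigrams(word), open_bigrams(transposed_nonword)))          # high (~0.8)
    print(overlap(adjacent_bigrams(word), adjacent_bigrams(transposed_nonword)))  # much lower (~0.14)
    # If only the tolerant ventral code survives a parietal lesion, transposed nonwords
    # look very word-like, producing the false positives Shalev et al. report.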

Burgund & Edwards (2008) in Neuroreport

In this fMRI study of letter priming in the VWFA, the authors compared same-identity and different-identity primes that both had moderate visual similarity to the target. They found no advantage for the same-identity primes, and concluded that the VWFA does not employ abstract letter representations.

However, the task that the subjects performed was based on a visual attribute - whether the letter had an enclosed space. It is perhaps not surprising then that priming was determined by the visual similarity between the prime and the target. It is quite possible that a task based on letter identity would show a different pattern of results.

Also, there is some evidence (e.g., studies by Gauthier and colleagues) that single letters are processed differently than strings. So studies using single-letter tasks may not tap into the abstract letter representations used for string processing.

Work by Sylviane Valdois

In March, I had a marvelous visit to Sylviane's lab in Grenoble. She does very interesting work on string processing in dyslexia, and has shown that dyslexics are able to report fewer letters per fixation than unimpaired readers. We both agree that this is related to a deficit in the ability to allocate visual attention, although we have somewhat different interpretations. She thinks that dyslexics have a general deficit in the ability to allocate attention across multiple objects, whereas I think that dyslexics have a covert-attention deficit that interferes with letter-string processing in particular.

However, like many things in neuropsychology, there may be multiple contributing factors. Sylviane showed me some individual data that convinced me that some dyslexics do indeed have trouble distributing attention over more than one or two objects. I suspect that there are probably other subjects whose deficit is specific to letter strings.

Cohen et al. (2008) in Neuroimage

In this fMRI experiment, the authors presented words that were progressively degraded, under three manipulations:
  • Shifting the word into the LVF
  • Increasing the spacing between letters
  • Rotating the string
Within each manipulation, there were 5 levels of degradation, where level 1 was normal presentation, and level 5 was maximally degraded. For levels 4 and 5, the authors found a behavioral length effect under all three degradations. Parietal activity increased from level 1 to level 5. The authors conclude that words are normally processed in parallel, while degradation causes attention-driven, serial processing.

I wrote a commentary on this article, but Neuroimage would not publish it. Briefly, the commentary points out three main problems with their analysis.
  • If there's an abrupt shift in processing mode at the onset of the length effect (between levels 3 and 4), parietal activity should show a large jump between levels 3 and 4. However, within each manipulation, parietal activity was similar for levels 3 and 4.
  • Attention-driven processing cannot explain the time scale of the length effect, which was ~20 ms/letter, as it's been shown that serial covert shifts of attention take at least 300 ms per shift.
  • The authors cannot explain the results of Whitney & Lavidor (2004), who showed that the LVF length effect can be abolished via a contrast manipulation.
Furthermore, it is straightforward to explain their results under the SERIOL model. As I previously proposed in email to Andy Ellis, degradation would interfere with automatic bottom-up formation of the locational gradient. Therefore, top-down attention is recruited to form the activation gradient. This yields a less finely tuned (steeper) gradient than normal, and a length effect emerges from the usual serial processing. See the commentary for details.
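
As a rough illustration of that last step (with invented numbers): if the gradient is steeper, successive letters' firing times are spread further apart, so total readout time grows faster with string length.

    # Rough illustration with invented numbers: a steeper activation gradient spreads the
    # letters' firing times further apart, so readout time grows faster with string length.

    def readout_time_ms(n_letters, step, ms_per_activation_unit=100.0):
        # Letter i's activation is (1.0 - step * i); assume its firing latency grows in
        # proportion to how far its activation sits below the first letter's.
        return max(step * i * ms_per_activation_unit for i in range(n_letters))

    for step in (0.05, 0.20):                      # finely tuned vs. steep (degraded) gradient
        per_letter = readout_time_ms(6, step) - readout_time_ms(5, step)
        print(f"gradient step {step}: ~{per_letter:.0f} ms extra per added letter")
    # step 0.05 -> ~5 ms per letter (small enough to be hard to detect behaviorally)
    # step 0.20 -> ~20 ms per letter (a measurable length effect, on the order reported by Cohen et al.)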