Tuesday, June 3, 2008

Overlap model - Gomez, Ratcliff & Perea (in press) in Psychological Review

In this paper, the authors introduce the Overlap Model of letter-position encoding, which is essentially a sloppy slot-based model. For example, an L in the third position would activate an encoding for L in the third position, as well as, to a lesser degree, encodings for L in the second and fourth positions. The authors present a series of experiments to determine the model's parameter settings, which govern the amount of positional spread at each letter position. In the experiments, a five-letter string was presented for 60 ms, followed by a mask and a two-alternative forced choice (2AFC). The choices were strings, and the way in which the distractor differed from the target was systematically varied. Note also that the choices were presented well below where the string had occurred, so they did not act as an additional backward mask.
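The sloppy-slot idea can be sketched numerically. The snippet below is my own minimal illustration, not the authors' implementation: each letter's position code is treated as a Gaussian centred on its veridical slot, so an L encoded in the third slot also activates neighbouring slots with diminishing weight. The standard deviation `sd` is a placeholder; in the paper the spread is a fitted parameter that varies by letter position.

```python
import math

def position_weight(encoded_pos, probe_pos, sd=0.7):
    """Gaussian activation that a letter encoded at `encoded_pos`
    sends to slot `probe_pos` (sloppy slot coding)."""
    return math.exp(-((encoded_pos - probe_pos) ** 2) / (2 * sd ** 2))

# An L encoded in slot 3 (1-indexed) activates nearby slots too:
weights = {slot: round(position_weight(3, slot), 3) for slot in range(1, 6)}
```

The weights peak at slot 3 and fall off symmetrically toward slots 2/4 and 1/5, which is all "sloppy slots" means here.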

From my point of view, the most interesting aspect of this paper is the finding that the final letter was the least well recognized and localized. In contrast, at longer exposures (>= 100 ms), the usual final-letter advantage can be observed. This contrast is consistent with serial processing and the resulting account of the final-letter advantage, and is difficult to explain otherwise.

However, as a model of letter-position encoding, the Overlap Model faces some difficulties.
  1. It is not a full model, as it does not explain how the positions are computed. How is the retinotopic representation transformed into a string-centered positional representation?
  2. It cannot explain the finding that, for nine-letter words, the prime 6789 provides facilitation (Grainger et al., 2006 in JEP:HPP). Even with a sloppy position encoding, there would be too much difference between the letters' positions in the prime and target to provide any overlap. Nor could the model be modified to include a position encoding anchored at the final letter; their experiments do not support the existence of such an encoding, as the final letter was the least well anchored/localized (in contrast to the initial letter, which was the best anchored/localized).

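Point 2 can be made concrete with a toy calculation. The scoring rule below is my own sketch, not the paper's actual similarity computation: positions are coded as Gaussians and each target letter takes the best positional evidence the same letter contributes from anywhere in the prime. The value of `sd` and the example word "spreading" are placeholders of mine, not items or parameters from the studies.

```python
import math

def match_score(prime, target, sd=0.7, anchor="initial"):
    """Toy prime-target similarity under Gaussian slot coding.
    Positions are counted from the first letter ("initial") or,
    hypothetically, from the last letter ("final")."""
    def pos(i, s):
        return i if anchor == "initial" else i - len(s)
    score = 0.0
    for t_i, t_letter in enumerate(target):
        best = 0.0
        for p_i, p_letter in enumerate(prime):
            if p_letter == t_letter:
                d = pos(p_i, prime) - pos(t_i, target)
                best = max(best, math.exp(-d * d / (2 * sd ** 2)))
        score += best
    return score / len(target)

# "ding" is the 6789 prime of the (hypothetical) 9-letter target "spreading".
initial_anchored = match_score("ding", "spreading")                # ~ 0
final_anchored = match_score("ding", "spreading", anchor="final")  # ~ 0.44
```

With initial anchoring, every prime letter sits five slots from its target position, so the overlap is essentially nil and the model predicts no 6789 priming. A final-letter anchor would restore the overlap, but that is exactly the encoding the localization data argue against.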

jonathan grainger said...

Carol has correctly identified a major problem for the overlap model: the finding that the final four letters of a 9-letter word effectively prime that word (Grainger et al., 2006). It would be nice to read how the authors of the (in press) overlap model propose to account for a result published several years ago.

Ken Forster said...

Two points:

1. There is no priming when every pair of letters is transposed, e.g., isedawkl-SIDEWALK (Guerrera & Forster, LCP 2008). However, other equally (seemingly) unrecognizable primes are quite effective (e.g., sdiwelak). If transposition simply weakens the activation of the target, I would have expected a sloppy-slot system to predict a reasonable amount of priming in the all-letters-transposed condition.

2. I would expect 6789 priming only when there are very few words that end with these letters. Also, what would happen with reversed halves, e.g., "walkside" (56781234)? Has anyone tried that condition?
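Point 1 can be made concrete with a toy scoring rule. The function below is my own illustrative similarity measure under sloppy (Gaussian) slot coding, not any published model's actual computation, and `sd` is a placeholder value.

```python
import math

def match_score(prime, target, sd=0.7):
    """Toy prime-target similarity under sloppy (Gaussian) slot coding:
    each target letter takes the best positional evidence that the same
    letter contributes from anywhere in the prime."""
    score = 0.0
    for t_i, t_letter in enumerate(target):
        best = 0.0
        for p_i, p_letter in enumerate(prime):
            if p_letter == t_letter:
                d = p_i - t_i
                best = max(best, math.exp(-d * d / (2 * sd ** 2)))
        score += best
    return score / len(target)

score_isedawkl = match_score("isedawkl", "sidewalk")  # ~ 0.36: every letter one slot off
score_sdiwelak = match_score("sdiwelak", "sidewalk")  # ~ 0.52: first and last letters in place
score_walkside = match_score("walkside", "sidewalk")  # ~ 0: every letter four slots off
```

On this toy scoring, both transposition primes retain substantial overlap with the target, which is exactly Forster's point: a sloppy-slot scheme seems to predict measurable priming for isedawkl, contrary to the null result. The reversed-halves prime, by contrast, scores near zero here, so on this sketch the model would predict no priming for "walkside" either.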
