Audiovisual phonological fusion (AVPF) is a recently discovered perceptual phenomenon in speech perception in which visual information (e.g., back) combines with auditory information (e.g., lack) to create a fused percept (e.g., black) (Radicke, 2007). The current study investigated the effects of temporal asynchrony on the perception of AVPF. Subjects were presented with stimuli that differed in the amount of temporal offset, ranging from 300 ms of auditory lead to 500 ms of visual lead, and were asked to perform two tasks. In the fusion task, subjects reported what they thought the speaker said. In the asynchrony judgment task, subjects determined whether the auditory and visual portions occurred at the same time ("in sync") or at different times ("out of sync"). The stimuli presented in both tasks were the same, but the ordering of the tasks was manipulated to test whether completing either task first would affect performance on the other. We found that (1) AVPF was moderately robust to temporal asynchrony; (2) synchrony judgments were robust for AVPF stimuli; and (3) the ordering of the tasks can modulate performance, at least for the asynchrony judgment task. Implications for current theories of audiovisual integration are discussed.