Aero-tactile integration in speech perception

That’s the title of a recent article (November 26) in Nature by Bryan Gick and Donald Derrick (Linguistics, University of British Columbia), widely covered by science journalists: for example, on National Public Radio, in New Scientist, in the New York Times, and in Scientific American.

The main result is that perception of the initial voiceless stops p and t (which are aspirated in English) is improved when a slight puff of air on the listener’s skin accompanies the sound, while perception of the voiced stops b and d is impaired by such an accompanying puff: listeners become more likely to mishear them as their aspirated counterparts. That is, tactile information is integrated with auditory information in speech perception.

There are long-known parallels in the integration of visual information with auditory information in speech perception: being able to see a speaker’s articulations improves speech perception in noisy environments, and if the two sorts of information are at odds, perception is distorted, as in the McGurk effect (where, classically, an audio “ba” paired with video of a speaker saying “ga” is heard as “da”).

These cross-modal interactions are consistent with some form of the motor theory of speech perception, which holds that speech perception is guided by identifying the vocal tract gestures involved in speech production — a hypothesis that gets some support from the discovery of mirror neurons, which respond both to performing an action and to observing the action being performed by others.

(Comments are off on this one because it’s significantly out of my field of expertise, so I’m mostly just providing links.)