ABSTRACT

The study of human face processing has advanced considerably in recent years, from a collection of isolated empirical facts and anecdotal observations to a relatively coherent view of the complexity and diversity of the problems tackled by a human observer when confronted with a face. This rapid progress can be traced to the proposal of comprehensive theories of face processing (cf. Ellis, 1975, 1986; Hay and Young, 1982; Bruce and Young, 1986), which have provided a theoretical framework for investigating human face processing in terms of functional subsystems. These models have had much to say about the kinds of tasks subserved by the human face processing system (e.g. naming faces, extracting visual categorical information such as sex and age), and about the co-ordination of processing among these tasks (e.g. Young et al., 1986). They have also provided important constraints for making sense of neuropsychological data on patients with various face processing deficits (e.g. Bruyer, 1986). Despite the success of these models in guiding research into many aspects of human face processing, they have offered less guidance in understanding the immensely complicated problems the perceptual system solves in extracting and representing the rich perceptual information available in human faces. In recent years, it has been primarily from computational models that the difficulty of this problem, and its importance to understanding human face processing abilities, have come to be appreciated.