ABSTRACT

This entry covers the past and present development of connectionism in three sections: “Roots”, “Revolution”, and “Radiation”. “Roots” summarizes the first efforts to assemble neuron-like elements into network models of cognitive function. It includes McCulloch and Pitts’ demonstration that their nets can compute any logical function, Rosenblatt’s perceptrons, and Hebb’s learning rule. The section ends with Minsky and Papert’s famous critique showing that perceptrons and other simple architectures cannot compute certain Boolean functions, such as exclusive-or. This sets the stage for a discussion of implementational versus radical interpretations of connectionist modeling. The second section describes the innovations that led to the PDP revolution of the 1980s, including sigmoidal and other nonlinear activation functions, backpropagation, multi-layer networks, and the introduction of simple recurrence. The section explains the new enthusiasm for these models, especially for the power of the distributed representations they employ, and ends with problems that influenced further developments, including the biological implausibility of backpropagation and scaling problems such as catastrophic interference. The third section, “Radiation”, examines a variety of more recent advances since the PDP heyday. These include multi-network architectures, biologically inspired alternatives to backpropagation, simulations of the effects of neuromodulators, deep convolutional networks, and comparisons between deep learning and Bayesian approaches.