Humans possess a natural ability to recognize spatial and temporal patterns in the world around them by processing signals gathered through their senses. A long-standing goal of artificial intelligence is to duplicate this ability with man-made sensors and computing devices. In practice, however, the goal of duplicating human ability often gives way to the goal of optimizing performance for the available sensing and computing resources. Furthermore, humans are notoriously bad at certain types of pattern recognition problems where man-made sensors and computers excel (e.g., detecting subtle changes).

Practical pattern recognition problems, where the goal is to optimize performance, are ubiquitous in the modern world, as witnessed by this highly abbreviated list of diverse examples: speech recognition, fingerprint recognition, face recognition, iris recognition, optical character recognition, image target recognition, bar code recognition, computer network intrusion detection, nuclear event detection, medical disease detection, financial fraud detection, structural damage detection (e.g., for bridges, buildings, airplanes, and ships), and anomaly detection (e.g., for screening at airports, shipping ports, and manufacturing production lines).

Although pattern recognition systems can be very complex, their basic operation is conceptually quite simple: first, convert the input data into patterns, and then assign a class label to each pattern. This chapter focuses on the second step, but we start with a brief discussion of the first.
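The two-step operation can be sketched in a few lines of code. The following is a minimal illustration, not any particular system's implementation: the feature choices (mean and peak of a signal), the class names, and the centroid values are all hypothetical, and the label-assignment rule shown here is a simple nearest-centroid classifier chosen purely for concreteness.

```python
import math

def extract_pattern(raw_signal):
    """Step 1: convert raw input data into a pattern (feature vector).
    The features here -- mean and peak of the signal -- are an
    illustrative choice, not a prescribed one."""
    mean = sum(raw_signal) / len(raw_signal)
    peak = max(raw_signal)
    return (mean, peak)

def assign_label(pattern, class_centroids):
    """Step 2: assign a class label to the pattern; here, by
    choosing the class whose centroid is nearest in feature space."""
    return min(class_centroids,
               key=lambda c: math.dist(pattern, class_centroids[c]))

# Hypothetical class centroids, assumed to have been estimated
# beforehand from labeled examples.
centroids = {"quiet": (0.1, 0.3), "loud": (2.0, 5.0)}

signal = [1.8, 2.2, 1.9, 5.1, 2.0]        # raw input data
pattern = extract_pattern(signal)          # step 1: data -> pattern
label = assign_label(pattern, centroids)   # step 2: pattern -> label
print(label)
```

In a real system each step can be far more elaborate, but the division of labor is the same: a front end that turns raw data into patterns, and a classifier that maps patterns to class labels.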