ABSTRACT

Research has shown that expert decision makers often make decisions in their area of expertise that are superior to those of laypeople – for example, expert physicians are better at discriminating levels of cardiac risk (Reyna & Lloyd, 2006), chess masters can identify the most promising moves during a game of chess (Chase & Simon, 1973; De Groot, 1978), and judges (but not jurors) are able to distinguish between qualitatively different types of harm in a legal case (Eisenberg, Rachlinski, & Wells, 2002). However, research has also shown that experts are fallible and susceptible to many of the cognitive biases that affect laypeople (Tversky & Kahneman, 1974). For example, expert physicians make different choices depending on whether the same information is presented in positive or negative terms (known as a framing effect) and fail to adjust sufficiently for population base rates when judging a conditional probability (e.g., the chances that a 40-year-old woman has breast cancer conditional on a positive diagnostic test) (e.g., Croskerry, in press; McNeil, Pauker, Sox, & Tversky, 1982; Reyna, 2005; Reyna & Lloyd, 2006; Shanteau & Stewart, 1992). In fact, research has shown that under certain circumstances experts can be more biased than novices in their area of expertise (Reyna, Chick, Corbin, & Hsia, 2014). In this chapter, we discuss the decision making of experts, including physicians, judges, and intelligence officers. Using the lens of fuzzy-trace theory (FTT), we provide a framework to explain why experts often make superior decisions, and when they are likely to be as susceptible to bias (systematic departures from applicable normative rational theory; Gilovich, Griffin, & Kahneman, 2002) as laypeople, or even more susceptible.