Much of contemporary statistical practice consists of using the methods of hypothesis testing, estimation, and confidence intervals in order to represent and interpret the evidence in a given set of observations. These same methods are used for other purposes as well, but here we are concerned only with their role in interpreting observed data as evidence, as typified by their conventional use in research reports in scientific journals. In particular, we are concerned with the rationale behind such applications. The most widely taught statistical theory, which is based on a paradigm of Neyman and Pearson (1933), explicitly views these statistical methods as solutions to problems of a different kind, so that these evidential applications fall outside the scope of that theory. In this chapter we describe the Neyman–Pearson theory and look at problems that arise when its results are used for interpreting data as evidence.
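For concreteness, the conventional reporting practice referred to above might be sketched as follows. This is an illustrative example only, not a procedure from the text: a two-sided z-test of a hypothesized mean (with known standard deviation) together with a 95% confidence interval, of the kind routinely quoted in research reports. The function name `z_test_and_ci` and all numbers are hypothetical.

```python
import math

def normal_cdf(x):
    # Standard normal CDF, computed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_and_ci(xbar, mu0, sigma, n, z_crit=1.959963984540054):
    """Two-sided z-test of H0: mu = mu0 (sigma assumed known),
    plus the matching 95% confidence interval for mu.
    Illustrative sketch only; names and defaults are hypothetical."""
    se = sigma / math.sqrt(n)          # standard error of the sample mean
    z = (xbar - mu0) / se              # test statistic
    p = 2.0 * (1.0 - normal_cdf(abs(z)))   # two-sided p-value
    ci = (xbar - z_crit * se, xbar + z_crit * se)  # 95% CI
    return z, p, ci

# Conventional use: report the p-value and interval as "the evidence".
z, p, ci = z_test_and_ci(xbar=10.5, mu0=10.0, sigma=2.0, n=100)
print(f"z = {z:.2f}, p = {p:.4f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A report would typically summarize this as "the mean differed significantly from 10.0 (z = 2.50, p = 0.012)" and read the small p-value as strong evidence against the null hypothesis; it is precisely this evidential reading, rather than the Neyman–Pearson decision-theoretic one, that the chapter scrutinizes.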