In this chapter we discuss the considerations and processes involved in validating automated scoring systems, including validity, validation, and evaluation. We begin with a high-level overview of validation considerations and then describe the process of validating an automated scoring system. This spans system architecture; the linguistic features typically extracted by an automated text or speech scoring system; sampling for training and evaluation sets; scoring engine models; scoring engine evaluations; evaluation criteria; population subgroup evaluations; prediction of external criteria; and gaming of the system. We also discuss evaluation examples of automated scoring systems, such as essay scoring, speech scoring, and intelligent tutoring systems. Finally, we conclude with a discussion of the readiness of automated scoring systems and implications for the field.