Major financial and professional investments are committed to outcome evaluations of new demonstration projects, with the expectation that these evaluations will yield useful information to inform policy, programs, and practice for low-income families. That potential often goes unrealized because of the premature choice to conduct national studies with experimental designs. At the same time, investment in formative evaluations of local program demonstrations remains limited, a level that does not reflect their importance to program development and performance or their essential role as a foundation for large-scale experimental studies. In the United States, decisions about when and how to conduct outcome evaluations are too frequently driven by politics rather than by best practice in evaluation research. There is a rush to prove that the investment in new programs has worked, relying on large-scale control-group studies that are deemed the most reliable scientific method for evaluating the efficacy of new initiatives. This chapter addresses a common misconception about the right timing for such evaluations and suggests approaches grounded in current knowledge of program evaluation methodology and supported by lessons learned from experience in the field.