Cluster randomized trials (CRTs), also known as group randomized trials, have become increasingly common in certain areas of public health research, such as the evaluation of lifestyle interventions, vaccine field trials, and studies of the impact of hospital guidelines on patient health. In such studies, the unit of randomization is a group of subjects rather than an individual, for example, a hospital, clinical practice, household, or village. Compared with individually randomized studies, CRTs are generally more complex and require investigators to consider issues such as possible lack of blinding, selection of the unit of randomization and the unit of inference, matching or stratification to improve treatment balance across clusters, and additional analytical challenges.

It is also well known that CRTs need more subjects than individually randomized trials to be adequately powered. Because of the correlation among subjects within the same cluster, the effective sample size is smaller than the total number of subjects. If N is the total number of subjects needed to achieve power (1 − β) in an individually randomized study, the number of subjects needed for the same power in a CRT is N multiplied by the design effect DE = [1 + (m − 1)ρ], where m is the average cluster size and ρ is a measure of within-cluster correlation, known as the intracluster (intraclass) correlation coefficient (ICC).

Despite the relative inefficiency and methodological complexities, researchers often choose cluster randomized designs over individually randomized trials for their logistical and practical convenience, to avoid the possibility of treatment group contamination, or because the interventions are naturally delivered at the cluster level, among other reasons. CRT methodology is generally well developed (e.g., see Murray, 1998; Donner and Klar, 2000; Hayes and Moulton, 2009).