This chapter covers methods for nonlinear dimensionality reduction, where the nonlinearity lies in the mapping between the high-dimensional space and the low-dimensional space. We begin with multidimensional scaling, a method that has been in use for many years. We then describe several more recently developed techniques: locally linear embedding, isometric feature mapping, and Hessian eigenmaps. We conclude the chapter with methods from the machine learning and neural network communities, including self-organizing maps, generative topographic maps, curvilinear component analysis, autoencoders, and stochastic neighbor embedding.