Reconstructing the 3D shape of a face from a single frontal image is an ill-posed problem, and it becomes even more challenging when the image is captured under varying poses and/or complex illumination conditions. In this paper, we address shape recovery from a single facial image under these challenging conditions. Local image models for each patch of the facial image and local surface models for each patch of the 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then established by a manifold alignment method. By combining the local shapes, the global shape of the face is reconstructed directly by solving a single least-squares system of equations. We perform experiments on synthetic and real data, and validate the algorithm against the ground truth. Experimental results show that our method yields accurate shape recovery on samples outside the training set, under a variety of pose and illumination variations.
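The final step described above, combining local patch shapes into a global shape through one least-squares system, can be sketched as follows. This is a minimal, hypothetical illustration only: the grid size, patch layout, and synthetic depth data are assumptions for demonstration, not the paper's actual models or data.

```python
import numpy as np

np.random.seed(0)

# Assumed setup: a 6x6 global depth grid covered by four overlapping
# 4x4 patches, each carrying its own (noisy) local depth estimate.
H, W, P = 6, 6, 4
true_depth = np.fromfunction(lambda y, x: 0.1 * (x + y), (H, W))

patches = []  # (top-left row, top-left col, local depth estimate)
for r in (0, 2):
    for c in (0, 2):
        est = true_depth[r:r + P, c:c + P] + 0.01 * np.random.randn(P, P)
        patches.append((r, c, est))

# Stack one equation per covered pixel: each row of A selects a global
# pixel, and b holds the corresponding local patch estimate.
idx, vals = [], []
for r, c, est in patches:
    for i in range(P):
        for j in range(P):
            idx.append((r + i) * W + (c + j))
            vals.append(est[i, j])

A = np.zeros((len(idx), H * W))
A[np.arange(len(idx)), idx] = 1.0
b = np.array(vals)

# One least-squares solve stitches all patches; pixels covered by
# several patches are effectively averaged.
z, *_ = np.linalg.lstsq(A, b, rcond=None)
depth = z.reshape(H, W)
```

In practice the per-patch equations would come from the aligned local surface models rather than synthetic data, and a sparse solver would replace the dense `lstsq` call for realistic image sizes.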