Traditional Shape-from-Shading (SFS) techniques aim to solve an under-constrained problem: estimating a depth map from a single image. The results are often brittle on real images containing detailed shapes. Inspired by recent advances in texture synthesis, we present an exemplar-based approach to improve the robustness and accuracy of SFS. In essence, we utilize an appearance database synthesized from known 3D models, where each image pixel is associated with its ground-truth normal. The input image is compared against the images in the database to find the most likely normals. The prior knowledge from the database is formulated as an additional cost term under an energy-minimization framework for solving the depth map. Using a small generic database consisting of 50 spheres of different radii, our approach demonstrates a marked improvement in reconstruction quality on both synthetic and real images of varying shapes, in particular those with small details.
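The exemplar lookup described above can be sketched in a minimal form: render Lambertian spheres to collect (intensity, ground-truth normal) pairs, then match each input pixel's intensity against the database to retrieve the most likely normal. This is an illustrative assumption of the pipeline only; all function names, the frontal light direction, and the sampling parameters are hypothetical and not taken from the paper, and the full method would add patch-based matching and the energy-minimization step.

```python
import numpy as np

def render_sphere_exemplars(n, light=np.array([0.0, 0.0, 1.0])):
    """Sample a Lambertian unit sphere on an n-by-n grid and
    return (intensity, ground-truth normal) exemplar pairs."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                         indexing="ij")
    r2 = xs**2 + ys**2
    mask = r2 < 1.0                                   # pixels inside the disk
    z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    normals = np.stack([xs, ys, z], axis=-1)[mask]    # unit surface normals
    intensities = np.clip(normals @ light, 0.0, 1.0)  # Lambertian shading
    return intensities, normals

def build_database(radii):
    """Pool exemplars from spheres of different radii; here the radius
    only controls the image-space sampling density (an assumption)."""
    ints, nrms = [], []
    for r in radii:
        i, nvec = render_sphere_exemplars(n=int(32 * r) + 16)
        ints.append(i)
        nrms.append(nvec)
    return np.concatenate(ints), np.concatenate(nrms)

def most_likely_normals(image, db_int, db_nrm):
    """For each pixel, return the normal of the closest database exemplar
    (nearest neighbor in intensity)."""
    idx = np.abs(image.reshape(-1, 1) - db_int[None, :]).argmin(axis=1)
    return db_nrm[idx].reshape(*image.shape, 3)

# Hypothetical usage: a generic database of 50 spheres, queried with a tiny image.
db_int, db_nrm = build_database(radii=np.linspace(0.5, 2.0, 50))
test_img = np.array([[1.0, 0.5], [0.25, 0.0]])
prior = most_likely_normals(test_img, db_int, db_nrm)
print(prior[0, 0])  # bright pixel -> near-frontal normal under frontal light
```

In the full framework, these retrieved normals would not be used directly; they would enter the energy function as a prior term balanced against the shading data term when solving for the depth map.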