In this thesis, a comprehensive procedure for generating general models of human pose is presented. First, a database of 3D~scans spanning the space of human pose and shape variations is introduced. Then, four approaches for transforming this database into a general model of human pose and shape are presented, each improving on the current state of the art. Finally, experiments are performed to evaluate and compare the proposed models on real-world problems: characters are generated from semantic constraints, and the underlying shape and pose of humans is estimated from 3D~scans, multi-view video, or uncalibrated monocular images.