This thesis introduces new methods for markerless tracking of the full articulated motion of
hands and for informing the design of gesture-based computer input. Emerging devices such
as smartwatches and virtual/augmented reality glasses require new input methods for
interaction on the move. The highly dexterous human hands could provide an always-on input
capability without the need to carry a physical device. First, we present novel methods
to address the challenging computer-vision problem of hand tracking under varying numbers
of cameras, viewpoints, and run-time requirements. Second, we contribute to the design of
gesture-based interaction techniques by presenting heuristic and computational approaches.
The contributions of this thesis allow users to effectively interact with computers through
markerless tracking of hands and objects in desktop, mobile, and egocentric scenarios.