attention due to its far-reaching influence in telepresence and mixed
reality applications. Most previous systems first record video
sequences and then process them off-line to generate 3D models and novel
views. Recently, some on-line processing systems have emerged, based on
one of the two classic 3D reconstruction techniques: depth from
stereo or visual hull reconstruction.
In this talk I will present a novel system that combines these two
methods to acquire dynamic events in the real world at interactive
rates. First, the silhouettes from multiple views are used to construct
a polyhedral visual hull as an initial estimate of the object in the
scene. The visual hull is then used to restrict the disparity
range in the stereo computation. The use of silhouettes in an early
processing stage reduces the amount of computation significantly. The
restricted search range generated from the visual hull improves both the
speed and the quality of the stereo reconstruction. In turn, stereo
compensates for some of the inherent drawbacks of the visual hull
method, such as its inability to recover concave surface regions.
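To illustrate the idea of hull-restricted matching, here is a minimal sketch, not the talk's actual implementation: a block-matching (SAD) stereo pass in which each pixel searches only a narrow disparity interval. The per-pixel bounds `d_min`/`d_max`, and all other names here, are hypothetical; in the system described they would be derived by projecting the visual hull into the stereo pair.

```python
import numpy as np

def hull_restricted_stereo(left, right, d_min, d_max, window=5):
    """Block-matching (SAD) stereo where each pixel searches only the
    disparity interval [d_min, d_max]. The bounds are assumed to come
    from the visual hull; names and interface are illustrative."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            best_cost, best_d = np.inf, 0.0
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            # Search only the hull-derived interval instead of the
            # full disparity range -- fewer candidates per pixel.
            for d in range(int(d_min[y, x]), int(d_max[y, x]) + 1):
                if x - d - half < 0:
                    break  # matching patch would leave the right image
                patch_r = right[y - half:y + half + 1,
                                x - d - half:x - d + half + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, float(d)
            disp[y, x] = best_d
    return disp
```

The narrowed interval is what yields both gains mentioned above: fewer candidate disparities per pixel (speed) and fewer ambiguous matches outside the plausible depth range (quality).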