In this thesis, we present a new approach to view synthesis that avoids the above problems by synthesizing new views directly from existing images of a scene. By working with an image-based representation of scene geometry computed by stereo vision methods, we avoid constructing a global 3D model, and we can synthesize realistic new views quickly using image warping.
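As an illustration of the warping idea, the following is a minimal sketch (not the thesis's exact algorithm): given a reference image and a per-pixel disparity map from stereo, each pixel is shifted horizontally in proportion to its disparity to simulate a nearby viewpoint. Splatting pixels in order of increasing disparity lets nearer surfaces overwrite farther ones; the function name and `shift` parameter are illustrative.

```python
import numpy as np

def forward_warp(image, disparity, shift=1.0):
    """Synthesize a nearby view by shifting each pixel horizontally by
    its disparity, scaled by the hypothetical baseline factor `shift`.
    Pixels are splatted far-to-near (increasing disparity), so closer
    points overwrite occluded background points."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    # Visit pixels in order of increasing disparity (far to near).
    order = np.argsort(disparity, axis=None)
    ys, xs = np.unravel_index(order, (h, w))
    new_xs = np.round(xs + shift * disparity[ys, xs]).astype(int)
    # Keep only pixels that land inside the new image.
    valid = (new_xs >= 0) & (new_xs < w)
    out[ys[valid], new_xs[valid]] = image[ys[valid], xs[valid]]
    return out
```

Unfilled positions in `out` are disocclusion holes, i.e. regions visible in the new view but not in the reference image; handling them is discussed below.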
Applying stereo to the new task of view synthesis makes it necessary to re-evaluate the requirements placed on stereo algorithms. We compare view synthesis to several traditional applications of stereo, and conclude that stereo vision is better suited for view synthesis than for applications requiring explicit 3D reconstruction. We also discuss ways of dealing with partially occluded regions of unknown depth and with completely occluded regions of unknown texture, and present experiments demonstrating that realistic new views can be synthesized efficiently even from inaccurate and incomplete depth information.
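One simple way to treat completely occluded regions, sketched below under an assumption that is not necessarily the thesis's exact rule: disocclusion holes exposed by the camera motion border the background surface on one side, so they can be filled by propagating the adjacent background pixel along each scanline. The function name and the left-to-right fill direction are illustrative choices.

```python
import numpy as np

def fill_holes(warped, hole_value=0):
    """Fill disocclusion holes in a warped grayscale image by
    propagating the nearest non-hole pixel from the left, i.e. from
    the side where the occluded background surface lies (an assumed
    convention for a rightward viewpoint shift)."""
    out = warped.copy()
    h, w = out.shape[:2]
    for y in range(h):
        for x in range(1, w):
            if out[y, x] == hole_value:
                out[y, x] = out[y, x - 1]  # copy background neighbor
    return out
```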
This thesis also contributes several novel stereo algorithms that are motivated by the specific requirements imposed by view synthesis. We introduce a new evidence measure based on intensity gradients for establishing correspondences between images. This measure combines the notions of similarity and confidence, and allows stable matching and easy assignment of canonical depth interpretations in image regions with insufficient information. We also present new diffusion-based stereo algorithms that are motivated by the need to correctly recover object boundaries. In particular, we develop a novel Bayesian estimation technique that significantly outperforms area-based algorithms using fixed-size windows. We provide experimental results for all algorithms on both synthetic and real images.
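To make the contrast with fixed-size windows concrete, here is a minimal sketch of plain diffusion-based cost aggregation (not the thesis's Bayesian variant): per-pixel matching costs are repeatedly blended with their four neighbors, so support grows gradually instead of being a fixed square window. All names and the diffusion parameters are illustrative.

```python
import numpy as np

def diffuse_costs(cost, n_iter=10, lam=0.25):
    """Aggregate a 2D matching-cost slice by isotropic diffusion:
    each iteration blends every cost with the average of its four
    neighbors (borders replicated)."""
    c = cost.astype(float).copy()
    for _ in range(n_iter):
        up = np.roll(c, 1, axis=0);    up[0] = c[0]
        down = np.roll(c, -1, axis=0); down[-1] = c[-1]
        left = np.roll(c, 1, axis=1);  left[:, 0] = c[:, 0]
        right = np.roll(c, -1, axis=1); right[:, -1] = c[:, -1]
        c = (1 - lam) * c + lam * (up + down + left + right) / 4.0
    return c

def winner_take_all(cost_volume):
    """Pick, at each pixel, the disparity with minimum aggregated
    cost; `cost_volume` has shape (disparities, height, width)."""
    return np.argmin(cost_volume, axis=0)
```

In a full pipeline, `diffuse_costs` would be applied to each disparity slice of the cost volume before the winner-take-all step; the number of iterations plays a role analogous to window size, but the effective support adapts smoothly rather than being a hard square.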
The thesis is available as CS Technical Report TR96-1604.