
Simulated Human Vision
Ian Overington
Location: Eastbourne, UK
ianoverington@simulatedvision.co.uk | www.simulatedvision.co.uk

Stereo Fusion

For stereo, on the assumption that the orientation of the stereo baseline is known (ideally horizontal), the processing for fusion is very well behaved and relatively easy to demonstrate. Since there is no uncertainty normal to the stereo baseline, all local displacements may be resolved into their components along and normal to the baseline. A much more direct computation may then be carried out after grouping collections of local displacements, subject only to ignoring contributions from local edges that are themselves nearly parallel to the baseline (for which the displacement along the baseline is poorly constrained).
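
As a rough illustration of this resolution step (a minimal sketch, not the author's own implementation), the following assumes that local displacement vectors and local edge orientations are already available from earlier processing; the function name, the 15-degree limit and the input conventions are all illustrative assumptions.

    import numpy as np

    def resolve_displacements(displacements, edge_angles, baseline_angle=0.0,
                              parallel_limit_deg=15.0):
        """Resolve local displacements into components along and normal to the
        stereo baseline, ignoring edges nearly parallel to the baseline.

        displacements  : (N, 2) array of local (dx, dy) estimates
        edge_angles    : (N,) local edge orientations in radians
        baseline_angle : baseline orientation in radians (0.0 = horizontal)
        """
        displacements = np.asarray(displacements, dtype=float)
        edge_angles = np.asarray(edge_angles, dtype=float)

        # Unit vectors along and normal to the baseline.
        along = np.array([np.cos(baseline_angle), np.sin(baseline_angle)])
        normal = np.array([-np.sin(baseline_angle), np.cos(baseline_angle)])

        # Fold the edge/baseline angle difference into [0, pi/2] and drop
        # edges lying nearly parallel to the baseline.
        rel = np.abs((edge_angles - baseline_angle + np.pi / 2) % np.pi - np.pi / 2)
        keep = rel > np.deg2rad(parallel_limit_deg)

        d_along = displacements[keep] @ along     # components along the baseline
        d_normal = displacements[keep] @ normal   # components normal to the baseline
        return d_along, d_normal, keep

Grouping the along-baseline components over a region then gives the kind of pooled, more direct estimate described above.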

To the right is a pair of images captured by a high-resolution digital camera (2400 x 1800 pixels) with a considerable lateral displacement in order to generate a stereo view. If greyscale images are created from each of the originals and a red/cyan point-to-point overlay is then carried out, the major areas of mismatch will be seen as either red or cyan patches, with the rest of the composite being shades of grey.
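
As a rough sketch of such an overlay (the file names and library choice are assumptions, not part of the original demonstration), the two views can be converted to greyscale and combined so that the left image drives the red channel and the right image drives the green and blue (cyan) channels:

    import numpy as np
    from PIL import Image

    # Placeholder file names for the left and right views of the stereo pair.
    left = np.asarray(Image.open("left.jpg").convert("L"), dtype=np.uint8)
    right = np.asarray(Image.open("right.jpg").convert("L"), dtype=np.uint8)

    # Red from the left view, green and blue (cyan) from the right view.
    # Matched regions come out grey; mismatched areas show as red or cyan patches.
    composite = np.dstack([left, right, right])
    Image.fromarray(composite, mode="RGB").save("red_cyan_overlay.png")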

It will be seen from this that there is a major mismatch between the two images, which is sufficient to completely mask the relatively small differential disparities due to stereo effects. The task is therefore to determine the best overall fit, such that the underlying stereo depth data may be sensed. [For a full report of this, see ‘Fusion01.pdf’.]
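
The full fitting procedure is reported in ‘Fusion01.pdf’; as a generic stand-in only (not a reproduction of that method), the dominant whole-image shift can be estimated by phase correlation of the two greyscale frames:

    import numpy as np

    def global_shift(ref, moving):
        """Estimate the dominant whole-image shift between two greyscale frames
        by phase correlation. Returns the (row, col) shift that best brings
        `moving` into register with `ref`."""
        ref = np.asarray(ref, dtype=float)
        moving = np.asarray(moving, dtype=float)
        cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrapped peak coordinates to signed shifts.
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return dy, dx

Shifting one image by the recovered offset before the red/cyan overlay removes the gross mismatch, leaving only the small residual disparities that carry the stereo depth information.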

Continued