Stereoscopic visual processing by a neurologically plausible "neural network"

Stereoscopic information in a pair of views of a scene resides in the relative
positions in the two views of the images of specific features in the external
world. If the displacements of the images can be determined unambiguously,
then the relative distances of the features can readily be obtained. Many
varieties of animals, including humans, appear to perform this process quickly
and reliably in a "natural" environment. Machine methods, on the other hand,
have so far had little success outside very constrained environments, largely
because of the difficulty of unambiguously matching corresponding images in
the two views.
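
As a concrete illustration of the displacement-to-distance step, the sketch
below computes distance from horizontal disparity under an idealized rectified,
parallel-camera geometry (Z = f·B/d); the focal length, baseline and disparity
values are hypothetical and are not taken from the work described.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Distance Z (metres) of a feature whose images are offset horizontally
    by `disparity_px` pixels between the two views: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# Larger displacements mean nearer features: for a 700 px focal length and a
# 6 cm baseline, 20 px -> 2.1 m and 5 px -> 8.4 m.
for d in (20.0, 5.0):
    print(d, "px ->",
          depth_from_disparity(d, focal_length_px=700.0, baseline_m=0.06), "m")
```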

The work described builds on a "neural network" that was developed to extract
features from images in a manner similar to the way the brain appears to do
so. This network was extended to extract features from a pair of stereoscopic
views at a number of different resolutions. The positions of "corresponding"
features are then compared. Using this method, stereoscopic information can
be extracted reasonably reliably from simple test images, from relatively
simple natural scenes and from random-dot stereograms.
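
The abstract does not specify how the multi-resolution comparison is carried
out; the sketch below illustrates only the general coarse-to-fine idea, using
plain block matching on an image pyramid rather than the neural network itself.
All function names, window sizes and search ranges are assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (a crude stand-in for a
    proper pyramid level)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def block_disparity(left, right, row, col, guess, search, win=2):
    """Pick the horizontal offset within `search` of `guess` whose window in
    the right view best matches the window around (row, col) in the left view."""
    h, w = left.shape
    r0, r1 = max(row - win, 0), min(row + win + 1, h)
    patch = left[r0:r1, max(col - win, 0):min(col + win + 1, w)]
    best_d, best_err = guess, np.inf
    for d in range(guess - search, guess + search + 1):
        c = col - d
        cand = right[r0:r1, max(c - win, 0):min(c + win + 1, w)]
        if cand.shape != patch.shape:
            continue
        err = np.mean((patch - cand) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

def coarse_to_fine_disparity(left, right, row, col, levels=3, max_disp=16):
    """Estimate the disparity of the feature at (row, col): search widely at
    the coarsest resolution, then double the estimate and refine it at each
    finer resolution."""
    lefts, rights = [left.astype(float)], [right.astype(float)]
    for _ in range(levels - 1):
        lefts.append(downsample(lefts[-1]))
        rights.append(downsample(rights[-1]))
    d = 0
    for lvl in range(levels - 1, -1, -1):        # coarsest level first
        scale = 2 ** lvl
        search = max_disp // scale if lvl == levels - 1 else 1
        d = block_disparity(lefts[lvl], rights[lvl],
                            row // scale, col // scale, d, search)
        if lvl > 0:
            d *= 2                               # carry the estimate down a level
    return d
```

Searching widely only at the coarsest resolution and refining locally at finer
ones keeps the correspondence search small and less ambiguous, which mirrors
the motivation for extracting features at several different resolutions.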