Matching points between multiple images of a scene is a vital component of many computer vision tasks. Point matching involves creating a succinct and discriminative descriptor for each point. While current descriptors such as SIFT can find matches between features with unique local neighborhoods, they typically fail to use global context to resolve ambiguities that arise when an image contains multiple similar regions. This paper presents a feature descriptor that augments SIFT with a global context vector adding curvilinear shape information from a much larger neighborhood, thus reducing mismatches when multiple local descriptors are similar. It also handles 2D nonrigid transformations more robustly, since points are matched individually using global-scale information rather than by constraining multiple matched points to fit a planar homography. We have tested our technique on various images and compare the matching accuracy of the SIFT descriptor with global context to that of SIFT alone.
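A minimal sketch of the matching rule the abstract implies: each keypoint carries a local SIFT vector plus a global context vector, and the match cost is a weighted combination of the two distances, so a match is preferred only when both the local patch and the surrounding shape agree. The weight omega, the normalization, and all array names here are illustrative assumptions, not taken from the paper.

    import numpy as np

    def match_with_global_context(sift_a, ctx_a, sift_b, ctx_b, omega=0.5):
        """Match keypoints of image A to image B.

        sift_a : (Na, 128) SIFT descriptors for image A
        ctx_a  : (Na, K)   global context vectors for image A
        sift_b, ctx_b : same for image B
        omega  : assumed weight on the local SIFT term
        """
        # L2-normalize each descriptor so the two distance terms share a scale
        def normalize(x):
            return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

        sa, sb = normalize(sift_a), normalize(sift_b)
        ca, cb = normalize(ctx_a), normalize(ctx_b)

        # pairwise Euclidean distances, shape (Na, Nb), for each component
        d_sift = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=2)
        d_ctx = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)

        # combined cost: small only when BOTH the local descriptor and the
        # global curvilinear-shape context agree, which disambiguates
        # repeated local structure
        d = omega * d_sift + (1.0 - omega) * d_ctx

        # greedy nearest-neighbour assignment, A -> B
        matches = d.argmin(axis=1)
        return matches, d

Here matches[i] is the index of the keypoint in image B assigned to keypoint i of image A; a ratio test or mutual-consistency check could be layered on top of d in the usual way.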
A SIFT descriptor with global context
01.01.2005
2,353,751 bytes
Conference paper
Electronic resource
English
Elsevier | 2025
British Library Online Contents | 2015
British Library Conference Proceedings | 2006

Similar items:
Keypoint descriptor matching with context-based orientation estimation | British Library Online Contents | 2014