Many tasks in computer vision, such as determining camera motion, estimating depth from stereo, and searching image databases, are based on features that are matched between images. Feature detectors and descriptors such as SIFT are widely used, and their results are often thought of as 'point features'. These features, however, are not just points: each one also has an orientation and a scale. In this talk I will present recent work showing that this orientation and scale information can help with two key tasks in computer vision: finding feature matches and determining the relative pose (motion) of two cameras.
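To illustrate why orientation and scale are useful, note that a single oriented, scaled feature match already determines a local 2D similarity transform between the two images, whereas a bare point match constrains only a translation. The sketch below (not the speaker's actual method; the numbers are hypothetical) recovers that similarity from one match:

```python
import numpy as np

def similarity_from_match(p1, angle1, size1, p2, angle2, size2):
    """Estimate the 2D similarity (scale s, rotation R, translation t)
    implied by one oriented, scaled feature match.  The point positions
    give 2 constraints; the feature orientations and scales supply the
    remaining 2 DOF of the 4-DOF similarity transform."""
    s = size2 / size1                      # scale ratio of the two features
    theta = np.deg2rad(angle2 - angle1)    # rotation from the orientations
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.asarray(p2, float) - s * R @ np.asarray(p1, float)
    return s, R, t

# Hypothetical match: a feature in image 1 at (10, 20) with orientation
# 30 degrees and scale 4, matched to (50, 60), 75 degrees, scale 8.
s, R, t = similarity_from_match((10, 20), 30.0, 4.0, (50, 60), 75.0, 8.0)
mapped = s * R @ np.array([10.0, 20.0]) + t   # image-1 point under the transform
```

Here `mapped` lands exactly on the image-2 point by construction; in practice, how well nearby features agree with this local similarity can be used to accept or reject candidate matches.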
Last modified: Tuesday, 15-Sep-2015 11:29:24 NZST