
SIFT and Feature Matching

It is probable that both Eirik and Hans Georg will be off sick, and you will be left to work on your own. This is unfortunate, but there is little theoretical material in this session, and we believe you can work through the practical exercise, relying on the OpenCV documentation.

Briefing

In feature tracking, we use the time derivative to track each feature point once it has been detected.

In feature matching, we compare images which are not necessarily related on the time axis. In other words, we need to compare features directly, and pair corresponding features in two independent images.

To identify features, we use feature descriptors. One of the most popular descriptors is SIFT (Scale-Invariant Feature Transform).

Unfortunately, this is not described in the textbook. For details, one can refer to Szeliski’s textbook, which is currently available as a draft of the second edition. SIFT is described on page 435ff in the version of 30 September this year.

The principle used by SIFT is to gather statistics from the neighbourhood around the feature point, forming a vector which is used as an identifier.

Exercises

In the exercise, you should use the OpenCV implementation of SIFT, and test feature matching between pairs of images.

Start by finding one or more pairs of images. There has to be an overlap (30-40%) where you expect to find corners in both images. You can of course use two frames from the video you used in the previous exercise. It may be a good idea to test different pairs, to learn what can and cannot be matched.

Exercise 1

Use one of the two images for this exercise. We are going to compare the results of the Harris detector with those of the SIFT detector.

  1. Detect and draw corners as you have done earlier, using the Harris corner detector.
  2. Repeat using the SIFT detector implemented in OpenCV.

    Compare the results: how do the keypoints correlate?
  3. OpenCV has the function cv.drawKeypoints that can be used to draw keypoints from the SIFT detector. Look up the documentation and test this function.
  4. When the keypoints are drawn, the function also visualizes the “strength” of each keypoint with a larger circle. Does the strength affect the correlation?

Exercise 2

Following is a code snippet that prints some information from one of the SIFT keypoints:

Find a few keypoints and consider the meaning of the information from the snippet above.
NB: The keypoints are not sorted; you can sort them by e.g. strength (descending) with the following code:

kps_s, desc_s = zip(*sorted(list(zip(kps, desc)), key=lambda pt: pt[0].response, reverse=True))

Exercise 3

Apply SIFT to both images. Where do you expect to find overlapping features?

Implement a brute-force feature matcher using the Euclidean distance between the keypoint descriptors. In other words, for a given point (descriptor) \(X_1\) in image 1, we calculate the Euclidean distance from this descriptor to each of the points (descriptors) in image 2. The point \(X_1\) is matched with the point \(X_2\) in image 2 that has the smallest Euclidean distance.

The Euclidean distance can be calculated using cv.norm with normType=cv.NORM_L2.
More information on feature matching can be found in chapter 7.1.3 of Szeliski’s textbook.

Combine/stack the two images horizontally and draw a line between the matching keypoints.

  • You can combine them with e.g. np.concatenate((img1, img2), axis=1) (only for grayscale images of the same height)
  • You can draw a line with cv.line
  • Use the OpenCV documentation for details.

Exercise 4

Let us now review what kind of errors we have in the matching from Exercise 3.

  • True Positive is a correctly matched feature point.
  • False Positive is an incorrectly matched feature point.
  • False Negative is a point falsely left unmatched.
  • True Negative is a non-match correctly rejected.

  • Where do you have True-Positives, False-Positives, False-Negatives and True-Negatives?

If you do not have any false positives/negatives, try a more complex image set.

(Optional) Exercise 5

To remove false positives, we can use Lowe’s ratio test, where we discard keypoints whose best match is not clearly better than the second-best candidate.

From StackOverflow:
“Short version: each keypoint of the first image is matched with a number of keypoints from the second image. We keep the 2 best matches for each keypoint (best matches = the ones with the smallest distance measurement). Lowe’s test checks that the two distances are sufficiently different. If they are not, then the keypoint is eliminated and will not be used for further calculations.”

Implement this filtering function.
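One possible sketch of the filter, assuming desc1 and desc2 are the descriptor arrays for the two images. The function name ratio_test_match is our own, and the 0.75 threshold is a commonly used value, not the only valid choice.

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.75):
    """Brute-force matching with Lowe's ratio test: keep a match only if
    the best distance is clearly smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # L2 distance to all of image 2
        j1, j2 = np.argsort(dists)[:2]             # two closest candidates
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1), float(dists[j1])))
    return matches

# Demo: an unambiguous match is kept, an ambiguous one is rejected.
query = np.array([[1.0, 0.0, 0.0]])
distinct = np.array([[1.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
ambiguous = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(ratio_test_match(query, distinct))
print(ratio_test_match(query, ambiguous))
```

In the ambiguous case the two candidate distances are equal, so the ratio test rejects the match, which is exactly the StackOverflow description quoted above.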

Debrief