Edge Detection

Date: 29 September 2021

Briefing: Edge Lecture

Exercise

Implement a prototype that can track a simple object in a video scene. You have full freedom to approach this as you please, but bear in mind that we only ask for a prototype. The goal is to learn the constituent techniques, not to build a production-ready solution.

  1. You can set up your own scene and record your own video, with a characteristic, brightly coloured object moving through it, e.g. a bright red ball rolling across the floor.
  2. Start with the feature detector. Make sure that it works.
  3. Can you use the feature detector to detect your particular object in a still image?
  4. Visualise the detected object by drawing a frame around it in the image (a sketch of steps 3 and 4 follows this list).
  5. Introduce tracking only when you have a working prototype for still images.
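
As one possible starting point for steps 3 and 4, the sketch below detects a brightly coloured object in a single image using simple colour thresholding (a stand-in for whatever feature detector you end up using) and draws a bounding box around the largest detected region. It assumes OpenCV and numpy are available; the file name and HSV thresholds are placeholders to tune for your own scene.

```python
import cv2
import numpy as np

# Hypothetical file name; use a still frame from your own video instead.
image = cv2.imread("frame.png")

# HSV makes it easier to isolate a bright, saturated colour than BGR.
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Assumed HSV range for a bright red object; tune these bounds for your scene.
lower = np.array([0, 120, 120])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Keep the largest connected region and frame it (OpenCV 4.x return values).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.png", image)
```

Once this works on a single frame, tracking (step 5) can be as simple as running the same detection on every frame of the video.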

Modularisation

There are several subproblems that are most easily handled separately. Try to identify such problems, solve them one by one, and test the solutions. The ones listed below are just examples.

Time derivative

One key challenge is to calculate the time derivative \(I_t\).

  1. The easiest approach is to take each pixel, form a one-dimensional signal from that pixel's value in every frame, and apply a (1D) Sobel filter (or another derivative filter) to it. From these per-pixel \(I_t\) signals, a frame (matrix) \(I_t(t)\) can be reconstructed for each time step.
  2. It is possible to process all the pixel signals in parallel using matrix operations in numpy (see the sketch below this list). This requires slightly more abstract thinking, but once mastered, the code is both simpler and more efficient.
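
A minimal sketch of the vectorised variant in item 2, assuming the greyscale frames have already been stacked into a numpy array of shape (num_frames, height, width). A central-difference filter is applied along the time axis for every pixel at once; the array name, kernel choice, and boundary handling are assumptions.

```python
import numpy as np

# Stand-in for the video: greyscale frames stacked along axis 0,
# shape (num_frames, height, width). Load your own frames here instead.
frames = np.random.rand(100, 240, 320).astype(np.float32)

# Central-difference time derivative, equivalent to filtering every pixel's
# time signal with a 1D derivative kernel, but computed for all pixels in
# parallel with array slicing.
I_t = np.empty_like(frames)
I_t[1:-1] = (frames[2:] - frames[:-2]) / 2.0
I_t[0] = frames[1] - frames[0]       # one-sided difference at the first frame
I_t[-1] = frames[-1] - frames[-2]    # one-sided difference at the last frame

# I_t[t] is the time-derivative frame for time step t.
print(I_t.shape)  # (100, 240, 320)
```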

Debrief