---
title: Edge Detection
categories: session
---

**Date** 29 September 2021

**Briefing** [Edge Lecture]()

# Exercise

Implement a prototype able to track a simple object in a video scene. You have full freedom to do this as you please, but bear in mind that we only ask for a *prototype*. The goal is to learn the constituent techniques, not to build a full production solution.

1. Set up your own scene and record your own video, with a characteristic, brightly coloured object moving through the scene, e.g. a bright red ball rolling on the floor.
2. Start with the feature detector and make sure that it works.
3. Can you use the feature detector to detect your particular object in a still image?
4. Visualise the detected object by drawing a frame around it in the image.
5. Introduce tracking only when you have a working prototype for still images.

Minimal code sketches for several of these steps are collected at the end of this page.

## Modularisation

There are several subproblems which can most easily be handled separately. Try to identify such problems, solve them one by one, and test the solutions. The ones listed below are just examples.

Make sure that you document partial results and take note of what you learn from each one. The code is not the only output of a design step.

### Time derivative

One key challenge is to calculate the time derivative $I_t$.

1. The easiest approach is to handle each pixel separately: form a one-dimensional signal by taking the same pixel from every frame, and apply a (1D) Sobel filter (or another derivative filter) to it. From these per-pixel $I_t$ signals, a frame (matrix) $I_t(t)$ can be reconstructed for each time step.
2. It is possible to process all the pixel signals in parallel using matrix operations in numpy. This requires a little more abstract thought, but once mastered, the code is both simpler and more efficient.

Sketches of both approaches appear at the end of this page.

### Partial solutions and experimentation

Try to identify tests you can run to check that parts of the code work as intended.

For instance, you should attempt tracking from one frame to the next. You need more frames to compute the derivative (depending on the length of the filter), but focus on the feature point in two frames only.

1. Where is the feature point in the first frame?
2. Where does the feature point move according to the tracking algorithm?
3. Where is the feature point actually located in the second frame, according to the corner detector?

You may find some discrepancy between 2 and 3; the algorithms are not perfect. A sketch of this two-frame check also appears at the end of this page.

# Debrief
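Below are the sketches referred to above. First, the feature-detector step (exercise step 2): a minimal corner-detection check. OpenCV is an assumed library choice here (any equivalent toolkit works), the Shi-Tomasi detector `cv2.goodFeaturesToTrack` stands in for whichever detector you use, and the file names and parameter values are placeholders to adapt.

```python
import cv2

frame = cv2.imread("frame.png")                     # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Find up to 25 strong corners; qualityLevel and minDistance are
# starting points to tune for your own scene.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=25,
                                  qualityLevel=0.01, minDistance=10)

if corners is not None:
    for x, y in corners.reshape(-1, 2):
        # Mark each detected corner with a small filled red circle.
        cv2.circle(frame, (int(x), int(y)), 4, (0, 0, 255), -1)

cv2.imwrite("corners.png", frame)
```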
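For exercise steps 3 and 4 (detecting the coloured object in a still image and framing it), one simple route is colour thresholding rather than corner features. The HSV bounds below are a rough guess for a bright red object and will need tuning to your own object and lighting; the file names are again hypothetical.

```python
import cv2

frame = cv2.imread("frame.png")                     # hypothetical still image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold in HSV.  These bounds are a guess for bright red; note
# that red hue wraps around, so a second range near hue 180 may be
# needed as well.
mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = max(contours, key=cv2.contourArea)    # keep the largest blob
    x, y, w, h = cv2.boundingRect(biggest)
    # Draw a frame around the detected object.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.png", frame)
```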
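For the first time-derivative approach, a naive per-pixel sketch: loop over pixels, build each one-dimensional time signal, and convolve it with a 1D derivative kernel (the derivative component of a Sobel filter). The random array stands in for your own stack of greyscale video frames.

```python
import numpy as np

# Stand-in for your video: T greyscale frames of size H x W.
T, H, W = 10, 48, 64
frames = np.random.rand(T, H, W)

kernel = np.array([1.0, 0.0, -1.0])   # 1D central-difference kernel

It = np.zeros_like(frames)
for i in range(H):
    for j in range(W):
        # The one-dimensional signal: the same pixel in every frame.
        signal = frames[:, i, j]
        It[:, i, j] = np.convolve(signal, kernel, mode="same")
```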
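The vectorised variant (second approach) replaces the double loop with slicing along the time axis, so numpy processes all pixel signals at once. For interior time steps this produces the same values as the loop above; only the first and last frames differ, because the two sketches handle the boundary differently.

```python
import numpy as np

T, H, W = 10, 48, 64
frames = np.random.rand(T, H, W)      # stand-in for your video

# Central differences for all pixels at once: slice along the time
# axis instead of looping over pixels.
It = np.zeros_like(frames)
It[1:-1] = frames[2:] - frames[:-2]
```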
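Finally, a sketch of the two-frame check from the last section: where is the feature point in the first frame, where does the tracker say it moves, and where does the corner detector re-find it? OpenCV's pyramidal Lucas-Kanade tracker (`cv2.calcOpticalFlowPyrLK`) stands in here for whatever tracking step you implement yourself; the frame files are hypothetical.

```python
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# 1. Feature points in the first frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=10,
                             qualityLevel=0.01, minDistance=10)

# 2. Where the tracker says they move.
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# 3. Where the corner detector actually finds features in frame two.
p2 = cv2.goodFeaturesToTrack(curr, maxCorners=10,
                             qualityLevel=0.01, minDistance=10)

# Compare 2 and 3: distance from each tracked point to the nearest
# re-detected corner.  Some discrepancy is expected.
for point, ok in zip(p1.reshape(-1, 2), status.ravel()):
    if ok:
        gap = np.linalg.norm(p2.reshape(-1, 2) - point, axis=1).min()
        print(f"tracked to {point}, nearest detected corner {gap:.1f} px away")
```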