---
title: Edge Detection
categories: session
---

**Date** 13 or 14 October

**Briefing** Status on the Tracker Project.  
If we need the Thursday session for this only, Edge Detection will be
postponed to Friday.

**Briefing** [Edge Lecture]()

**Reading** Ma (2004) Ch 4.4;
Tutorials on OpenCV:
[Canny Edge Detection](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html);
[Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html)

**Debrief**
We look at the
[Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html)
as an example of reading mathematical texts.

# Exercises

## Python API

This is based on Ma (2004) Exercise 4.9, which is written for Matlab.

### The Canny edge detector

1.  Find a test image.
2.  Test the `Canny` edge detector in OpenCV.
    See the [tutorial](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html)
    for an example. 
    What kind of data does it generate?  What do the data look like?
3.  Experiment with different thresholds and different window sizes
    (apertures); a minimal sketch follows this list.
    See the [docs](https://docs.opencv.org/3.4/dd/d1a/group__imgproc__feature.html#ga04723e007ed888ddf11d9ba04e2232de) for an overview of the parameters
    for `Canny`.
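
A minimal sketch of such a test; the file name and the parameter values
are arbitrary choices to experiment with, not prescribed by the exercise:

```python
import cv2 as cv
import numpy as np

# Load a test image as greyscale; Canny works on a single channel.
img = cv.imread("test.jpg", cv.IMREAD_GRAYSCALE)

# Hysteresis thresholds: gradients above 200 mark strong edges; pixels
# between 100 and 200 are kept only if they connect to a strong edge.
edges = cv.Canny(img, 100, 200, apertureSize=3)

# The result is a binary uint8 image of the same size as the input,
# with edge pixels set to 255 and everything else to 0.
print(edges.dtype, edges.shape)
print(np.unique(edges))
```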

It is not difficult to implement your own Canny edge detector.
The exercise would be very similar to the Harris corner detector,
and add little new.

### Connected Components

The edge detector gives a binary image.  How can you find collections
of pixels forming edges?

You can either,

1. implement your own connected components function, using the ideas
   from the [briefing](Edge Lecture) (a sketch follows this list), or
2. test the `connectedComponents` function in OpenCV.
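
One possible sketch of the first option, labelling components by
breadth-first search over the eight neighbours of each pixel; the
function name is illustrative:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """Label 8-connected components in a binary image (BFS sketch).

    binary: 2D array where non-zero pixels are edge pixels.
    Returns an int32 array where each component gets its own label,
    counting from 1; the background stays 0.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0, x0] and labels[y0, x0] == 0:
                current += 1                  # start a new component
                labels[y0, x0] = current
                queue = deque([(y0, x0)])
                while queue:
                    y, x = queue.popleft()
                    # Enqueue unlabelled edge pixels among the 8 neighbours
                    # (the centre pixel is already labelled and is skipped).
                    for ny in range(max(y - 1, 0), min(y + 2, h)):
                        for nx in range(max(x - 1, 0), min(x + 2, w)):
                            if binary[ny, nx] and labels[ny, nx] == 0:
                                labels[ny, nx] = current
                                queue.append((ny, nx))
    return labels
```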

Visualise the components you find, for instance by using different
colours.  Do they correspond to the objects *you* see in the image?
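
For the second option, a sketch that labels the Canny output (assumed to
be in `edges`, as above) and draws each component in a random colour; the
output file name is an arbitrary choice:

```python
import cv2 as cv
import numpy as np

# `edges` is the binary uint8 image from the Canny step above.
n_labels, labels = cv.connectedComponents(edges)

# Assign each label a random colour so components are easy to tell apart.
rng = np.random.default_rng(0)
palette = rng.integers(0, 256, size=(n_labels, 3), dtype=np.uint8)
palette[0] = 0                        # keep the background black
coloured = palette[labels]            # H x W x 3 colour image

cv.imwrite("components.png", coloured)
print(n_labels - 1, "components found")
```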

### Line fitting

If you do not have time to try both of the approaches below, that's all right,
but you should at least try one.  Feel free to choose.

#### Basic approach

1.  Implement a simple line fitter using the ideas from the
    [briefing](Edge Lecture).
2.  Can you identify straight lines among the components?
3.  Calculate the angle $\theta$ and the distance $\rho$ from the
    origin for each component.  A sketch of such a fit follows this list.
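
One possible sketch, fitting by least squares via the SVD of the centred
pixel coordinates; the function name and the flatness measure are
illustrative choices, not taken from Ma (2004):

```python
import numpy as np

def line_params(pixels):
    """Least-squares line fit for one component (a sketch).

    pixels: (N, 2) array of (x, y) coordinates, N >= 2.
    Returns (theta, rho, flatness): the angle theta of the line's normal
    and the signed distance rho from the origin, in the same
    (theta, rho) parametrisation the Hough transform uses, plus the
    ratio of the singular values, which is near 0 for a straight
    component.
    """
    pts = pixels.astype(float)
    centre = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centre, full_matrices=False)
    dx, dy = vt[0]                            # direction along the line
    theta = np.arctan2(dy, dx) + np.pi / 2    # angle of the normal
    rho = centre @ np.array([np.cos(theta), np.sin(theta)])
    return theta, rho, s[1] / s[0]

# Usage, given the `labels` image from the connected components step:
#   ys, xs = np.nonzero(labels == k)
#   theta, rho, flatness = line_params(np.column_stack([xs, ys]))
```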

#### Hough transform

1.  Run through the tutorial on the [Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html).
2.  Tweak the code to print out the co-ordinates of the lines detected, that is $\theta$ and $\rho$.
3.  Write a function to find where the lines intersect the $x$- and $y$-axes, and list this information too.
4.  Can you see (easily) where each edge ought to be in the visual image?
5.  Write a routine, using OpenCV or otherwise, to plot the lines from the Hough transform on top of
    the image from the Canny detector.  Do they match?  It is probably best if you use different colours.
    - You can make an RGB image and copy the result from Canny into one colour channel, and write the edges
      in a different one.  A sketch covering steps 2, 3, and 5 follows this list.
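
A sketch of steps 2, 3, and 5 together, assuming the binary image `edges`
from the Canny step; the vote threshold and output file name are arbitrary
choices to tune:

```python
import cv2 as cv
import numpy as np

# `edges` is the Canny output from above; the threshold is the minimum
# number of accumulator votes for a line to be reported.
lines = cv.HoughLines(edges, 1, np.pi / 180, threshold=150)
lines = [] if lines is None else lines[:, 0]   # N x 2 rows of (rho, theta)

# Overlay: Canny edges in the green channel, Hough lines in red (BGR).
overlay = np.zeros((*edges.shape, 3), dtype=np.uint8)
overlay[:, :, 1] = edges

for rho, theta in lines:
    print(f"theta = {theta:.3f} rad, rho = {rho:.1f} px")
    c, s = np.cos(theta), np.sin(theta)
    # The line is x cos(theta) + y sin(theta) = rho, so the axis
    # intercepts are rho/cos(theta) and rho/sin(theta).
    if abs(c) > 1e-9:
        print(f"  crosses the x-axis at x = {rho / c:.1f}")
    if abs(s) > 1e-9:
        print(f"  crosses the y-axis at y = {rho / s:.1f}")
    # Two points far out along the line direction (-sin, cos), for drawing.
    x0, y0 = rho * c, rho * s
    p1 = (int(x0 - 2000 * s), int(y0 + 2000 * c))
    p2 = (int(x0 + 2000 * s), int(y0 - 2000 * c))
    cv.line(overlay, p1, p2, (0, 0, 255), 1)

cv.imwrite("hough_overlay.png", overlay)
```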

## Project

1.  Take an image of a box on a table (or use another simple image with straight lines).
2.  Use the techniques discussed above.  Can you identify some of the edges on the box?
3.  Can you find edges that are connected to each other, possibly finding a face on the
    box, delimited by four edges?
4.  Can you tell edges on the box apart from other edges in any way?

If you have time, you can start to make a prototype detecting an object (the box) and
tracking it from one frame to the next.