---
title: Edge Detection
categories: session
---
**Date** 13 or 14 October
**Briefing** Status on the Tracker Project.
If we need the Thursday session for this only, the Edge Detection session will be postponed
to Friday.
**Briefing** [Edge Lecture]()
# Exercise
**Reading** Ma (2004) Ch 4.4;
Tutorials on OpenCV:
[Canny Edge Detection](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html);
[Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html)
Implement a prototype able to track a simple object in a video scene.
You have full freedom to do this as you please, but bear in mind
that we only ask for a *prototype*. The goal is to learn constituent
techniques, not to make a full solution for production.
**Debrief**
We look at
[Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html)
as an example of reading mathematical texts.
1. You can set up your own scene, recording your own video, with a
characteristic, brightly coloured object moving through the scene.
e.g. a bright red ball rolling on the floor.
2. Start with the feature detector. Make sure that it works.
3. Can you use the feature detector to detect your particular object in
a still image?
4. Visualise the detected object by drawing a frame around it in the image (see the sketch after this list).
5. Introduce tracking only when you have a working prototype for still
images.
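
The following is a minimal sketch of steps 3 and 4, assuming OpenCV 4.x, a bright red object, and a hypothetical still image `frame.png`; the HSV thresholds are guesses and will need tuning for your own scene.

```python
import cv2

# Sketch: detect a bright red object in a still image and frame it.
# "frame.png" and the HSV ranges are assumptions -- adjust for your scene.
img = cv2.imread("frame.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two hue ranges.
mask = cv2.bitwise_or(cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)),
                      cv2.inRange(hsv, (170, 120, 80), (180, 255, 255)))

# Keep the largest blob and draw its bounding box (OpenCV 4.x returns two values).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("detected object", img)
cv2.waitKey(0)
```

Once this works on a still image, the same steps can be applied frame by frame when you move on to tracking (step 5).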
# Exercises
## Python API
This is based on Ma (2004) Exercise 4.9, which is written for Matlab.
# Debrief
### The Canny Edge Detector
1. Find a test image.
2. Test the `Canny` edge detector in OpenCV.
See the [tutorial](https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html)
for an example.
What kind of data does it generate? What do the data look like?
3. Experiment with different thresholds and different window sizes
(apertures); a sketch follows after this list.
See the [docs](https://docs.opencv.org/3.4/dd/d1a/group__imgproc__feature.html#ga04723e007ed888ddf11d9ba04e2232de) for an overview of the parameters
for `Canny`.
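
As a starting point, here is a minimal sketch of items 2 and 3, assuming a hypothetical grey-scale test image `test.png`; the threshold pairs and aperture sizes are arbitrary values to compare.

```python
import cv2

# Sketch: run Canny with a few threshold pairs and aperture sizes and
# inspect the results. "test.png" is an assumed file name.
img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

for lo, hi in [(50, 150), (100, 200), (150, 300)]:
    for aperture in (3, 5, 7):
        edges = cv2.Canny(img, lo, hi, apertureSize=aperture)
        # The result is a single-channel uint8 image: 255 on edge pixels, 0 elsewhere.
        print(lo, hi, aperture, "edge pixels:", int((edges > 0).sum()))
        cv2.imshow(f"Canny {lo}-{hi}, aperture {aperture}", edges)
        cv2.waitKey(0)
        cv2.destroyAllWindows()
```

The printed pixel counts give a quick feel for how the thresholds and the aperture interact.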
It is not difficult to implement your own Canny edge detector.
The exercise would be very similar to the Harris corner detector,
and would add little new.
### Connected Components
The edge detector gives a binary image. How can you find collections
of pixels forming edges?
You can either
1. implement your own connected components function, using the ideas
from the [briefing](Edge Lecture), or
2. test the `connectedComponents` function in OpenCV.
Visualise the components you find, for instance by using different
colours (a sketch follows below). Do they correspond to the objects *you* see in the image?
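
One way to do the visualisation, sketched below under the assumption that the binary image comes from Canny on a hypothetical `test.png`; each label is mapped to its own hue.

```python
import cv2
import numpy as np

# Sketch: label the connected components of an edge image and colour them.
img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)

num_labels, labels = cv2.connectedComponents(edges)

# Map each label to a hue (0-179 in OpenCV); the background (label 0) stays black.
hue = np.uint8(179 * labels / max(num_labels - 1, 1))
full = np.full_like(hue, 255)
coloured = cv2.cvtColor(cv2.merge([hue, full, full]), cv2.COLOR_HSV2BGR)
coloured[labels == 0] = 0

print("components found (including background):", num_labels)
cv2.imshow("connected components", coloured)
cv2.waitKey(0)
```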
### Line fitting
If you do not have time to try both approaches, that's all right,
but you should at least try one. Feel free to choose.
#### Basic approach
1. Implement a simple line fitter using the ideas from the
[briefing](Edge Lecture).
2. Can you identify straight lines among the components?
3. Calculate the angle $\theta$ and the distance $\rho$ from the
origin for each component; a sketch of this calculation follows below.
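
A possible sketch of the calculation, using `cv2.fitLine` in place of a hand-written fitter and the normal form $x\cos\theta + y\sin\theta = \rho$; the commented-out loop shows how it might be applied to the labels from the connected components exercise.

```python
import cv2
import numpy as np

def line_parameters(points):
    """Fit a line to an (N, 2) array of (x, y) pixel coordinates and return
    (theta, rho) in the normal form x*cos(theta) + y*sin(theta) = rho."""
    vx, vy, x0, y0 = cv2.fitLine(np.float32(points), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    theta = np.arctan2(vx, -vy)   # the unit normal to the direction (vx, vy) is (-vy, vx)
    rho = x0 * np.cos(theta) + y0 * np.sin(theta)
    if rho < 0:                   # keep rho non-negative, as in the Hough convention
        rho, theta = -rho, theta + np.pi
    return theta, rho

# Possible use with the labels from connectedComponents:
# for label in range(1, num_labels):
#     ys, xs = np.nonzero(labels == label)
#     theta, rho = line_parameters(np.column_stack([xs, ys]))
#     print(label, theta, rho)
```

Whether a component actually is a straight line can be judged, for instance, by how far its pixels deviate from the fitted line.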
#### Hough transform
1. Run through the [Hough Line Transform](https://docs.opencv.org/3.4/d9/db0/tutorial_hough_lines.html) tutorial.
2. Tweak the code to print out the coordinates of the detected lines, that is, $\theta$ and $\rho$.
3. Write a function to find where the lines intersect the $x$- and $y$-axes, and list this information too.
4. Can you see (easily) where each edge ought to be in the visual image?
5. Write a routine, using OpenCV or otherwise, to plot the lines from the Hough transform on top of
the image from the Canny detector (see the sketch after this list). Do they match? It is probably best if you use different colours.
- You can make an RGB image and copy the result from Canny into one colour channel, and write the edges
in a different one.
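
A minimal sketch of steps 2 to 5, assuming a hypothetical test image `test.png` and arbitrary Canny and Hough thresholds; it prints $\theta$, $\rho$ and the axis intersections, and overlays the Hough lines (red channel) on the Canny edges (green channel).

```python
import cv2
import numpy as np

# Sketch: detect lines with the standard Hough transform, report their
# parameters and axis intersections, and overlay them on the Canny edges.
img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)

# Green channel: Canny edges.  Red channel: Hough lines.
overlay = np.zeros((*edges.shape, 3), np.uint8)
overlay[:, :, 1] = edges

for rho, theta in (lines[:, 0] if lines is not None else []):
    # Axis intersections of x*cos(theta) + y*sin(theta) = rho.
    x_cut = rho / np.cos(theta) if abs(np.cos(theta)) > 1e-6 else None
    y_cut = rho / np.sin(theta) if abs(np.sin(theta)) > 1e-6 else None
    print(f"theta={theta:.3f}, rho={rho:.1f}, x-axis cut={x_cut}, y-axis cut={y_cut}")

    # Draw the line far beyond the image borders, as in the OpenCV tutorial.
    a, b = np.cos(theta), np.sin(theta)
    x0, y0 = a * rho, b * rho
    pt1 = (int(x0 + 2000 * (-b)), int(y0 + 2000 * a))
    pt2 = (int(x0 - 2000 * (-b)), int(y0 - 2000 * a))
    cv2.line(overlay, pt1, pt2, (0, 0, 255), 1)

cv2.imshow("Canny (green) vs Hough lines (red)", overlay)
cv2.waitKey(0)
```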
## Project
1. Can you use edge detection in your tracker project?
2. Is it possible to match the edges to the object you want to track?
3. Can multiple connected components be used to give an idea of the
different objects in the scene?
Use the rest of the time to improve the tracker.