---
title: 3D Reconstruction (continuing Eight-point algorithm)
categories: session
---

**Briefing** [Eight-point algorithm Lecture]()

**Additional Reading**
Chapter 9 in *OpenCV 3 Computer Vision with Python Cookbook* by
Alexey Spizhevoy.
Search for it in [Oria](https://oria.no/).
There is an e-book available.

**Learning Outcome**
See how the eight-point algorithm can be fitted into a complete
system to do 3D reconstruction from a real stereo view.

# Exercises

In this exercise, we shall try to determine the relative pose
of two cameras, using the eight-point algorithm (or a variant thereof).

## Step 1.  Make a Data Set

1.  Take two images of the same scene, using different camera poses.
    - the difference between the poses should be significant,
      but small enough to recognise the same feature points.
    - for example, two consecutive frames from a video will probably be too similar.
2.  Run the Harris Detector on both images, and identify at least
    eight features which you can pair between the images
    (a sketch follows after the notes below).
    - if you do not find eight, you need to use more similar poses.

**Note 1**
It may be useful to calibrate the camera(s) and undistort the
images before starting.  It is ok to try without calibration first,
for the sake of simplicity.
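
If you already have intrinsic parameters from a calibration, a minimal
sketch of the undistortion step could look like this.  The camera matrix
`K`, the distortion coefficients `dist`, and the file names below are
placeholders for your own data.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- replace with your own calibration results.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([0.1, -0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

img1 = cv2.imread("frame1.png")
img2 = cv2.imread("frame2.png")

# Remove lens distortion so that the pinhole camera model holds more closely.
img1_u = cv2.undistort(img1, K, dist)
img2_u = cv2.undistort(img2, K, dist)
```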

**Note 2**
You should pair the feature points manually in this exercise, to make
sure that no mismatches ruin your results.
When you have the first prototype working, you can try to pair feature
points programmatically, using SIFT or other methods to match features
(see Step 5).
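
A minimal sketch of Step 1 with OpenCV and manual pairing might look like
the following.  The file names, the Harris parameters (common defaults),
and the paired pixel coordinates are placeholders that you replace with
your own data.

```python
import cv2
import numpy as np

def harris_corners(filename, threshold=0.01):
    """Return a copy of the image with strong Harris corner responses marked in red."""
    img = cv2.imread(filename)
    grey = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
    response = cv2.cornerHarris(grey, blockSize=2, ksize=3, k=0.04)
    img[response > threshold * response.max()] = (0, 0, 255)
    return img

# Inspect the marked corners and write down matching pixel coordinates by hand.
cv2.imwrite("corners1.png", harris_corners("frame1.png"))
cv2.imwrite("corners2.png", harris_corners("frame2.png"))

# Manually paired points (u, v) -- the numbers are only placeholders.
pts1 = np.array([[120, 230], [310, 225], [305, 410], [118, 415],
                 [210, 180], [420, 300], [150, 330], [390, 150]], dtype=np.float64)
pts2 = np.array([[140, 228], [330, 220], [322, 405], [135, 412],
                 [228, 176], [440, 295], [168, 326], [408, 147]], dtype=np.float64)
```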

## Step 2.  Eight-Point Algorithm

Use the [Eight-point algorithm]() from the previous exercise (Part 2)
to recover the relative pose $(R,T)$ between the cameras.

Does the transformation $(R,T)$ seem reasonable?
What does it mean in terms of rotation and translation in real-world
space?  Remember that the translation can only be recovered up to scale.
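
If you want something to compare your own eight-point implementation
against, OpenCV can estimate the essential matrix and decompose it into a
rotation and a unit-length translation.  This is only a cross-check, not
the eight-point algorithm itself, and it assumes the `K`, `pts1` and
`pts2` arrays from the sketches above.

```python
import cv2

# Essential matrix from the paired points (OpenCV uses its own estimator
# with RANSAC, so treat this only as a reference for your own implementation).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)

# Decompose E into R and T; T is only recovered up to scale.
_, R, T, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("R =\n", R)
print("T =\n", T)
```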

## Step 3.  3D Reconstruction

Calculate 3D co-ordinates in the global frame for each of the features
from Step 1.  Do the co-ordinates seem reasonable?
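
One possible sketch of the triangulation, taking the first camera as the
global frame and reusing `K`, `R`, `T`, `pts1` and `pts2` from the
sketches above:

```python
import numpy as np
import cv2

# Projection matrices: camera 1 defines the global frame, camera 2 is (R, T).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, T.reshape(3, 1)])

# triangulatePoints expects 2xN arrays and returns homogeneous 4xN points.
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X = (X_h[:3] / X_h[3]).T   # Nx3 Euclidean coordinates, up to the scale of T

print(X)
```

A quick sanity check is that the reconstructed points should have positive
depth in front of both cameras.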

## Step 4.  (Optional)  Visualisation

Visualise the reconstructed points in 3D, using for instance
`matplotlib` in Python.
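
A minimal sketch, assuming the Nx3 array `X` of reconstructed points from
Step 3:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3-D projection on older Matplotlib

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
plt.show()
```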

## Step 5.  (Optional)  Automatic Matching

Extend your system to use SIFT to automatically match features
in Step 1.
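
A possible sketch using SIFT and Lowe's ratio test to replace the manual
pairing; the file names and the ratio threshold are assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float64([kp1[m.queryIdx].pt for m in good])
pts2 = np.float64([kp2[m.trainIdx].pt for m in good])
```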

# Debrief

This example is under construction, and there is a problem
with the recovery of the relative pose.  It does, however,
exemplify the broad structure of a solution:

+ [Jupyter Notebook](Python/Triangulation.ipynb) which depends
  on two images: [Python/frame1glasses1.png]() and
  [Python/frame2glasses1.png]()