# Real World Reconstruction



**Briefing** [Real World Lecture]()

**Cookbook**
Chapter 9 in *OpenCV 3 Computer Vision with Python Cookbook* by
Alexey Spizhevoy.
Search for it in [Oria](https://oria.no/).  There is an e-book available.

# Exercises

The goal of this last exercise is to test a more complete
system for 3D reconstruction, using OpenCV.
The Cookbook by Alexey Spizhevoy is a useful starting point,
using selected recipes from Chapter 9.

You can choose whether to record your own data,
using external webcams, or use the dataset from the cookbook.
If you want to use external cameras, you may have to collaborate
and share, but group work is a good idea in any event.

## Using the cookbook

+ Many recipes load image data, which are available on
  [github](https://github.com/PacktPublishing/OpenCV-3-Computer-Vision-with-Python-Cookbook)
+ Many recipes load data saved in previous recipes
    + the save instruction is not shown in the book
    + it is included in the complete code on
      [github](https://github.com/PacktPublishing/OpenCV-3-Computer-Vision-with-Python-Cookbook)
+ When you use a recipe, you should always ask yourself,
  *what does the code do?*
+ If there is code you do not understand, you should ask.

## Stereo Calibration

Recipe p 240ff in the cookbook: *Stereo rig calibration*

Many features of this recipe should be known from our
previous [Camera Calibration]() exercise.

1.  Test the recipe with your choice of data.
2.  What do the various outputs mean?
3.  Match the calculations in the recipe to the
textbook theory.

## The Fundamental Matrix

Recipe p 257ff in the cookbook: *Epipolar Geometry*

Note that this recipe does not solve the general problem
of recovering the fundamental and/or essential matrices.
It actually depends on the calibration matrix
from *Stereo rig calibration* (above),
and on its chessboard points, making the problem planar.

1.  Test the recipe with your choice of data
from *Stereo rig calibration* (above).
2.  Do the new calculations match the previous ones?

## Recovering Rotation and Translation

Recipe p 259ff in the cookbook: *Essential matrix decomposition*

1.  Test the recipe with your choice of data.
2.  Find the axis and angle of rotation for each of the two candidate
rotation matrices.  You can use the logarithm as described
in Theorem 2.8 in the text book.
3.  Which of the two rotation matrices is correct?
Try to answer this by visual inspection of the images.

## Triangulation

Recipe p 250f in the cookbook: *Restoring a 3D point*

Note that the recipe uses synthetic data.

1.  Test the recipe with your choice of data.
2.  How is the relative pose of the cameras defined in
the recipe?

## Triangulation of Real Data

There does not seem to be a recipe for triangulation of real data,
but we can try to make one based on all the recipes tested
so far.

### Step 1. Dataset

1.  Make a stereo camera, i.e. a fixed positioning of two cameras.
Make sure to keep it fixed throughout data collection.
2.  Use a chessboard to make a set of images for calibration.
3.  Find some objects with a lot of corners and take images of
each one with your stereo setup.

It is a good idea to take more images than you think you need,
as it may be difficult to keep the camera setup fixed as you
fiddle with the programming.

### Step 2. Relative Pose

Obtain the relative pose \$(R,T)\$ of the two cameras by
repeating the recipes up until the recovery of rotation and
translation.

### Step 3. Corner Detection

Use the Harris Corner Detector to find corners in the
object images.  (You can do it in the chessboard images, but
that is less interesting.)

If you have time, you can use SIFT to match image pairs.
If you don't, please match them manually.

### Step 4. 3D Reconstruction

Here, we use `cv2.triangulatePoints` as in the previous
triangulation recipe.  The challenge is to define the
projection matrices, `P1` and `P2`, for our specific
stereo camera.

+ Recall that the projection is typically given as \$\Pi=K\Pi_0g\$, where
    + \$K\$ is the intrinsic camera matrix
    + \$\Pi_0\$ is the basic projection, constructed with the `np.eye` function
    + \$g=[R,T]\$ is the relative pose of the camera.
+ In the *Stereo Calibration*, you have found \$K\$ for each camera, i.e. `Kl` and `Kr`
+ If we select the first camera frame as world frame,
    + the first camera has \$g=[I,0]\$
    + the second camera has \$g=[R,T]\$, given by the relative pose of the cameras,
      as calculated in the previous recipe

Once you have constructed the two \$\Pi\$ matrices, you can recover the 3D points
as in the previous Triangulation recipe.

### Step 5. Visualisation

Take all the 3D points from the previous step and plot
them using matplotlib.  Does it look correct?

Note that we have not had time to use edge information to
reconstruct the 3D scene, and this makes the reconstruction
rather rudimentary.  It is a good start, though.

### Step 6. Reflection and Summary

Review this last exercise, and discuss,

1.  What assumptions have been necessary in the work?
2.  What data do you require to do 3D reconstruction in this way?
3.  To what extent is the recipe relevant to real life?
    - i.e. what kind of problems can and can't you solve this way?