
---
title: Introductory Session to Machine Learning
---

# Reading

+ Ma 2004 Chapter 1.

# Session

1.  **Briefing** Overview and History
2.  Install and Test Software
    - Simple tutorials
3.  **Debrief** questions and answers
    - recap of linear algebra


# 1 Briefing

## Practical Information

### Information

+ Wiki - living document - course content
+ BlackBoard - announcements - discussion fora
+ Questions - either
    - in class
    - in discussion fora
+ Email will only be answered when there are good reasons not to use public fora.

### Taught Format

+ Sessions 4h twice a week
    - normally 1h briefing + 2h exercise + 1h debrief (may vary)
+ Exercises vary from session to session
    + mathematical exercises
    + experimental exercises
    + implementational exercises
+ **No** Compulsory Exercises
+ **Feedback in class** 
    - please ask for feedback on partial work
+ Keep a diary.  Make sure you can refer back to previous partial solutions and reuse
  them.


### Learning Outcomes

+ Knowledge
    - The candidate can explain fundamental mathematical models for digital imaging,
      3D models, and machine vision
    - The candidate is aware of the principles of digital cameras and image capture
+ Skills
    - The candidate can implement selected techniques for object recognition and
      tracking
    - The candidate can calibrate cameras for use in machine vision systems
+ General competence
    - The candidate has a good analytic understanding of machine vision and of the
      collaboration between machine vision and other systems in robotics
    - The candidate can exploit the connection between theory and application for 
      presenting and discussing engineering problems and solutions

### Exam

+ Oral exam $\sim 20$ min.
+ First seven minutes are *yours* 
    - make a case for your grade wrt. learning outcomes
    - your own implementations may be part of the case
    - essentially that you can explain the implementation analytically
+ The remaining 13-14 minutes are for the examiner to explore further
+ More detailed assessment criteria will be published later

## Vision

![Eye Model from *Introduction to Psychology* by University of Minnesota](Images/eye.jpg)

+ Vision is a 2D image on the retina
    + Each cell perceives the intensity and colour of the light projected onto it
+ Easily replicated by a digital camera
    + Each pixel is the light intensity sampled at a given point on the image plane

## Cognition

![1912 International Lawn Tennis Challenge](Images/tennis.jpg)

+ Human beings see 3D objects
    - not pixels of light intensity
+ We *recognise* objects - *cognitive schemata*
    - we see a *ball* - not a round patch of white
    - we remember a *tennis match* - 
      more than four people with white clothes and rackets
+ We observe objects arranged in depth
    - in front of and behind the net
    - even though they are all patterns in the same image plane
+ 3D reconstruction from 2D retina image
    - and we do not even think about how

## Applications

- Artificial systems interact with their surroundings
    - navigate in a 3D environment
- Simpler applications
    - face recognition
    - tracking in surveillance cameras
    - medical image diagnostics (classification)
    - image retrieval (topics in a database)
    - detecting faulty products on a conveyor belt (classification)
    - aligning products on a conveyor belt 
- Other advances in AI create new demands on vision
    - 20 years ago, walking was a major challenge for robots
    - now robots walk, and they need to see where they go ...


## Focus

- Artificial systems interact with their surroundings
    - navigate in a 3D environment
- This means
    - Geometry of multiple views
    - Relationship between theory and practice
    - ... between analysis and implementation
- Mathematical approach
    - inverse problem; 3D to 2D is easy, the inverse is hard
    - we need to understand the geometry to know what we program

##  History 

- 1435: *Della Pittura* - first general treatise on perspective
- 1648 Girard Desargues - projective geometry
- 1913 Kruppa: two views of five points suffice to find
    - relative transformation 
    - 3D location of the points 
    - (up to a finite number of solutions)
- mid 1970s: first algorithms for 3D reconstruction
- 1981 Longuet-Higgins: linear algorithm for structure and motion
- late 1970s E. D. Dickmanns starts work on vision-based autonomous cars
    - 1984 small truck at 90 km/h on empty roads
    - 1994: 180 km/h, passing slower cars

## Python

- Demos and tutorials in Python
    - you can use whatever language you want
    - we avoid Jupyter to make sure we can use the camera and interactive displays easily
- Demos and help on Unix-like systems (may or may not include Mac OS)
- In the exercise sessions
    - install necessary software
    - use the tutorials to see that things work as expected
- In the debrief, we will start briefly on the mathematical modelling

# 2 Lab Practice

The most important task today is to install and test a number
of software packages that we will need.

## Install Python

We will use Python 3 in this module.
You need to install the following.

1. [Python](https://www.python.org/downloads/)
2. [pip](https://packaging.python.org/tutorials/installing-packages/).
    (the package manager for python)
3. [iPython](https://ipython.org/install.html)
    (a more convenient interactive interpreter)

How you install these three packages depends on your OS.
In most distros you can use the package system to install all of them,
for instance on Debian/Ubuntu:

```sh
sudo apt-get install ipython3 python3-pip
```

(Python is installed automatically as a dependency of iPython3.
Note, you have to specify version 3 in the package names,
lest you get Python 2.)

## Install Python Packages

Python packages are installed most easily using python's own
packaging tool, pip, which is independent of the OS.
It is run from the command line.

Depending on how you installed pip, it may be a good idea to upgrade

```sh
pip3 install --upgrade pip
```

Then we install the libraries we need.
You can choose to install either in user space or as root.

User space:

```sh
pip3 install --user matplotlib numpy opencv-python
```

As root:

```sh
sudo pip3 install matplotlib numpy opencv-python
```

+ numpy is a standard library for numeric computations.
  In particular it provides a data model for matrices with
  the appropriate arithmetic functions.
+ matplotlib is a comprehensive library for plotting, both in 2D and 3D. 
+ [OpenCV](https://opencv.org/) 
  is a Computer Vision library, written in C++ with bindings for
  several different languages.

A third installation alternative is to use
[Virtual Environments](https://docs.python.org/3/tutorial/venv.html),
which allows you to manage python versions and dependencies separately
for each project.  This may be a good idea if you have many python 
projects, but if this is your first one, it is not worth the hassle.

## Run iPython

Exactly how you run iPython may depend on your OS.
On Unix-like systems we can run it straight from the command line:

```sh
ipython3
```

This should look something like this:

```
georg$ ipython3 
Python 3.7.3 (default, Jul 25 2020, 13:03:44) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.3.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: print("Hello World")                                                    
Hello World

In [2]: import numpy as np                                                      

In [3]: np.sin(np.pi)                                                           
Out[3]: 1.2246467991473532e-16

In [4]: np.sqrt(2)                                                              
Out[4]: 1.4142135623730951

In [5]:                                                                         
```

## Some 3D Operations

In this section, we will define a simple 3D object and display it in Python.
The 3D object is an irregular tetrahedron, which has four corners and four
faces.


```python
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
```

First, we define the four corners of the tetrahedron.

```python
corners = [ [-1,0.5,0.5], [+1,0.5,0.5], [0,-0.5,0.5], [0,0.5,-0.5] ]
```

Each face is adjacent to three out of the four corners, and can
also be defined by these corners.

```python
face1 = [ corners[0], corners[1], corners[2] ] 
face2 = [ corners[0], corners[1], corners[3] ] 
face3 = [ corners[0], corners[2], corners[3] ] 
face4 = [ corners[1], corners[2], corners[3] ] 
```

To represent the 3D structure for use in 3D libraries,
we juxtapose all the faces and cast them as a matrix.


```python
vertices = np.array(face1+face2+face3+face4, dtype=float)
print(vertices)
```

Observe that the vertices (corners) are rows of the matrix.
The mathematical textbook model has the corners as columns, and
this is something we will have to deal with later.
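If you prefer the textbook convention with points as columns, the transpose gives it directly; just keep track of which convention each piece of code expects.

```python
# Transpose to the textbook convention: one corner per column (3 x N).
print(vertices.T)
```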

We define the 3D object `ob` as follows.

```python
ob = Poly3DCollection(vertices, linewidths=1, alpha=0.2)
```

The `alpha` parameter makes the faces translucent; a low value means more transparent.
You may also want to play with colours:

```python
ob.set_facecolor( [0.5, 0.5, 1] )
ob.set_edgecolor([0,0,0])
```

To display the object, we need to create a figure with axes.

```python
plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
plt.show()
```

Note the `plt.ion()` line.  You do not use this in scripts, but in
`ipython` it means that control returns to the prompt once the
figure is shown.  This is necessary in order to keep modifying the plot
after it has been created.

Now, we can add our object to the plot.

```python
ax.add_collection3d(ob)
```

Quite likely, the object lies outside the range of the axes.
We can fix this as follows:

```python
s = [-2,-2,-2,2,2,2]
ax.auto_scale_xyz(s,s,s)
```

These commands make sure that the axes are scaled so that the two
points `(-2,-2,-2)` and `(2,2,2)` (defined in the list `s`)
are shown within the domain.

## Rotation and Translation of 3D objects

Continuing from the previous section, our 3D object `ob` is defined
by the `vertices` matrix, where each row is a point in space.
Motion is described by matrix operations on `vertices`.

### Translation

**TODO**
Let us define another vector in $\mathbb{R}^3$ and add it
to each point.

```python
translation = np.array( [ 1, 0, 0 ], dtype=float )
v2 = vertices + translation
print(v2)
```

Note that this operation does not make sense in conventional
mathematics.  We have just added a $1\times3$ matrix to an $N\times3$ matrix.
How does Python interpret this in terms of matrices?
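NumPy resolves this by *broadcasting*: the single row is repeated along the first axis, so the same offset is added to every row.  A tiny standalone illustration (the values are made up):

```python
import numpy as np

points = np.array([[0, 0, 0], [1, 1, 1]], dtype=float)   # two points as rows
offset = np.array([1, 0, 0], dtype=float)                 # a single 1x3 offset

# Broadcasting repeats the offset for every row of `points`.
print(points + offset)
# [[1. 0. 0.]
#  [2. 1. 1.]]
```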

To see what this means visually in 3D space, we can generate
a new 3D object from `v2`.  We use a different face colour 
for clarity.

```python
ob2 = Poly3DCollection(v2, linewidths=1, alpha=0.2)
ob2.set_facecolor( [0.5, 1, 0.5] )
ob2.set_edgecolor([0,0,0])
ax.add_collection3d(ob2)
```

How does the new object relate to the first one in 3D space?

**TODO** Check

### Rotation

**TODO**
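A minimal sketch of one possible approach, continuing the iPython session above (the names `theta`, `R`, `v3`, `ob3` and the choice of a rotation about the z-axis are my own):

```python
theta = np.pi / 4                      # rotation angle (45 degrees)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

# The points are rows of `vertices`, so x -> R x becomes a right
# multiplication by the transpose of R.
v3 = vertices @ R.T

ob3 = Poly3DCollection(v3, linewidths=1, alpha=0.2)
ob3.set_facecolor( [1, 0.5, 0.5] )
ob3.set_edgecolor([0,0,0])
ax.add_collection3d(ob3)
```

How does the rotated object relate to the original one in 3D space?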

### Homogeneous Co-ordinates

**TODO**
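A minimal sketch, continuing the session above and reusing the rotation `R` and the vector `translation` (the names `vh`, `T` and `v4` are my own): in homogeneous coordinates each point gets an extra coordinate equal to 1, so rotation and translation combine into a single $4\times4$ matrix.

```python
# Append a fourth coordinate equal to 1 to every point (row): N x 4.
vh = np.hstack([vertices, np.ones((vertices.shape[0], 1))])

# A single 4x4 matrix combining the rotation R and the translation.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = translation

# Rows are points, so multiply by the transpose on the right,
# then drop the homogeneous coordinate again.
v4 = (vh @ T.T)[:, :3]
print(v4)
```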

## Some Camera Operations

```python
import numpy as np
import cv2 as cv
cap = cv.VideoCapture(0)
ret, frame = cap.read()
```

Now, `ret` should be `True`, indicating that a frame has
successfully been read.  If it is `False`, the following
will not work.
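A minimal guard (my own suggestion) to stop early in that case:

```python
if not ret:
    raise RuntimeError("No frame could be read; check that a camera is connected")
```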

```python
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
cv.imshow('frame', gray)
cv.waitKey(1) 
```

You should see a greyscale image from your camera.
To close the camera and the window, we run the following.

```python
cap.release()
cv.destroyAllWindows()
```

This example is adapted from the tutorial on
[Getting Started with Videos](https://docs.opencv.org/master/dd/d43/tutorial_py_video_display.html).
You may want to do the rest of the tutorial.


# 3 Debrief

## Vectors and Points

+ A *point* in space $\mathbf{X} = [X_1,X_2,X_3]^\mathrm{T}\in\mathbb{R}^3$
+ A *bound vector*, from $\mathbf{X}$ to $\mathbf{Y}$: $\overrightarrow{\mathbf{XY}}$
+ A *free vector* is the same difference, but without any specific anchor point
   + represented as $\mathbf{Y} - \mathbf{X}$ 
+ The set of free vectors forms a linear vector space
   + **note** points do not
   + The sum of two vectors is another vector
   + The sum of two points is not a point
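A small numerical illustration of the distinction (the point coordinates are made up):

```python
import numpy as np

X = np.array([1.0, 2.0, 0.0])   # a point
Y = np.array([0.0, 1.0, 1.0])   # another point

v = Y - X                       # the free vector from X to Y
print(v)                        # [-1. -1.  1.]

# Adding two free vectors gives another free vector; adding the
# coordinates of two points has no geometric meaning.
```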

## Dot product (inner product)

$$x=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\quad
  y=\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$$

**Inner product**
$$\langle x,y\rangle = x^\mathrm{T}y = x_1y_1+x_2y_2+x_3y_3$$

Euclidean **Norm**
$$||x|| = \sqrt{\langle x,x\rangle}$$

**Orthogonal vectors** when $\langle x,y\rangle=0$
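A quick numerical check of these definitions in NumPy (the vectors are made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 0.0, -1.0])

print(np.dot(x, y))        # inner product: 1*2 + 2*0 + 2*(-1) = 0
print(np.linalg.norm(x))   # Euclidean norm: sqrt(1 + 4 + 4) = 3.0

# The inner product is zero, so x and y are orthogonal.
```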

## Cross product

$$x\times y = 
\begin{bmatrix}
  x_2y_3 - x_3y_2 \\
  x_3y_1 - x_1y_3 \\
  x_1y_2 - x_2y_1 
\end{bmatrix} \in \mathbb{R}^3$$

Observe that

+ $y\times x = -x\times y$
+ $\langle x\times y, y\rangle = \langle x\times y, x\rangle = 0$

$$x\times y = \hat xy \quad\text{where}\quad \hat x =
\begin{bmatrix}
  0 & -x_3 & x_2 \\
  x_3 & 0 & -x_1 \\
  -x_2 & x_1 & 0
\end{bmatrix} \in \mathbb{R}^{3\times3}$$

$\hat x$ is a **skew-symmetric** matrix because $\hat x=-\hat x^\mathrm{T}$

## Right Hand Rule

**TODO**
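One way to illustrate the rule numerically (a suggestion, not from the notes): the standard basis vectors satisfy $e_1\times e_2 = e_3$, which is exactly what the right hand predicts.

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

print(np.cross(e1, e2))   # [0. 0. 1.] = e3
print(np.cross(e2, e1))   # [ 0.  0. -1.], the opposite direction
```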

## Skew-Symmetric Matrix

**TODO**
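A minimal sketch of the hat operator $x\mapsto\hat x$ defined above, with a check that $\hat xy = x\times y$ (the function name `hat` is my own choice):

```python
import numpy as np

def hat(x):
    """Skew-symmetric matrix such that hat(x) @ y == np.cross(x, y)."""
    return np.array([[    0, -x[2],  x[1]],
                     [ x[2],     0, -x[0]],
                     [-x[1],  x[0],     0]], dtype=float)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(hat(x) @ y)          # [-3.  6. -3.]
print(np.cross(x, y))      # the same vector
print(hat(x) + hat(x).T)   # the zero matrix: hat(x) is skew-symmetric
```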

## Change of Basis

**TODO**
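A possible sketch (my own example): if the columns of an invertible matrix $B$ are the new basis vectors expressed in the old basis, then the coordinates of a vector $v$ in the new basis are $B^{-1}v$.

```python
import numpy as np

# Columns of B are the new basis vectors, written in the standard basis.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([3.0, 2.0, 1.0])   # coordinates in the standard basis
c = np.linalg.solve(B, v)       # coordinates in the new basis (B^{-1} v)
print(c)                        # [1. 2. 1.]
print(B @ c)                    # back to [3. 2. 1.]
```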