title: Neural Networks Lecture
categories: lecture

# Briefing

## What is a neural network?

+ The single neuron (see the sketch after this list)
    + Weighted input
    + Activation
+ The network model
    + Input/Output
    + Weights
    + Activation function
+ The tensor model
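
A minimal sketch of the single-neuron computation, written in PyTorch (the tool chosen under Tools below); the input, weight, and bias values are invented for illustration:

```python
import torch

x = torch.tensor([0.5, -1.0, 2.0])   # input vector (made-up values)
w = torch.tensor([0.1, 0.4, -0.3])   # one weight per input
b = torch.tensor(0.2)                # bias term

z = torch.dot(w, x) + b              # weighted input
a = torch.sigmoid(z)                 # logistic activation
print(a)
```

In the tensor model, a whole layer of such neurons becomes a single matrix operation, e.g. `torch.sigmoid(W @ x + b)` with a weight matrix `W` and a bias vector `b`.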

## Output and Loss Function

+ Classification versus Regression

For regression, the squared-error loss compares the network output $x$ with the target $y$:

$$L = (x-y)^2$$

For classification, the cross-entropy loss compares the raw outputs $x_i$ with the correct class $y$:

$$L = -\log \frac{ \exp x_{y} }{ \sum_i \exp x_i }$$
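
A hedged sketch of both losses in PyTorch; the output and target values are invented for illustration:

```python
import torch
import torch.nn.functional as F

# Regression: squared error between output x and target y.
x = torch.tensor([0.8])
y = torch.tensor([1.0])
print(F.mse_loss(x, y))                  # (0.8 - 1.0)^2 = 0.04

# Classification: cross-entropy over raw scores, correct class y = 2.
scores = torch.tensor([[1.0, 2.0, 3.0]])
target = torch.tensor([2])
print(F.cross_entropy(scores, target))   # -log softmax(scores)[2]
```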

## Training

+ Optimisation problem
    + tune the weights to minimise the loss function
    + if the activation function is differentiable, then so is the entire
      network, and the gradient of the loss can be computed
    + different optimisation algorithms exist;
      trust the API, or take a more advanced module
      (a minimal training loop is sketched after this list)
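
A minimal training-loop sketch in PyTorch, trusting the API for the optimisation algorithm (plain stochastic gradient descent here); the toy data set and layer sizes are arbitrary choices:

```python
import torch

model = torch.nn.Sequential(          # a small two-layer network
    torch.nn.Linear(2, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
loss_fn = torch.nn.MSELoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.rand(16, 2)                 # 16 random input vectors
y = X.sum(dim=1, keepdim=True)        # toy target: the sum of the inputs

for epoch in range(100):
    optimiser.zero_grad()             # clear gradients from the last step
    loss = loss_fn(model(X), y)       # forward pass and loss
    loss.backward()                   # backpropagate: gradient of the loss
    optimiser.step()                  # tune the weights to reduce the loss
```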

## Activation Functions

+ Threshold functions
+ Approximations to the threshold function (compared in the sketch after this list)
+ Logistic: $f(x) = \frac1{1+e^{-\beta x}}$
+ ReLU: $f(x)=\max(x,0)$
    + not differentiable at $x=0$
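
A quick numerical comparison of the two activations above, taking $\beta = 1$ for the logistic function (PyTorch's `sigmoid`):

```python
import torch

x = torch.linspace(-3.0, 3.0, steps=7)
print(torch.sigmoid(x))   # logistic: 1 / (1 + exp(-x))
print(torch.relu(x))      # ReLU: max(x, 0)
```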

## Tools

Two main contenders:

+ TensorFlow
+ PyTorch
    + A replacement for NumPy to use the power of GPUs and other accelerators.
    + An automatic differentiation library that is useful to implement neural networks.

Note that PyTorch replaces NumPy; i.e. it is primarily a Python tool,
and operates in the object-oriented framework of Python.
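
A small illustration of that point; the tensor API mirrors the NumPy array API, but tensors can be moved to an accelerator:

```python
import numpy as np
import torch

a = np.ones((2, 3))
t = torch.ones(2, 3)

print(a.sum(), t.sum())          # the same operations on both
t = torch.from_numpy(a)          # arrays convert to tensors
if torch.cuda.is_available():    # move to a GPU if one is present
    t = t.to("cuda")
```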

The reason for using PyTorch in these examples is primarily that I have
lately been working off some code created by final-year students
this spring, and they happened to choose PyTorch.
The choice between TensorFlow and PyTorch is otherwise arbitrary.

## Sample Problem

## Sample Program