{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Face Recognition\n",
"\n",
"Face recognition is a widely studied application in computer vision.\n",
"There are several types of face recognition problem, such as (in increasing order of difficulty):\n",
"* Given training examples of a specific person, verify whether a new image is likely to be that person.\n",
"* Given training examples of many people, decide which person a new image represents.\n",
"* Given a large set of images, identify which images are of the same people.\n",
"\n",
"In this part we'll look at a classic technique called *Eigenfaces* and see how it can be used to help solve these problems. Eigenfaces are an application of PCA to image data, and can be used to represent face images in a (relatively) low-dimensional space.\n",
"\n",
"## References\n",
"\n",
"L. Sirovich and M. Kirby [Low-Dimensional Procedure for the Characterization of Human Faces](https://www.osapublishing.org/josaa/abstract.cfm?uri=josaa-4-3-519) Journal of the Optical Society of America A, 4(3), pages 519-524\n",
"\n",
"M. Turk and A. Pentland [Face Recognition using Eigenfaces](https://ieeexplore.ieee.org/document/139758/) Proc. IEEE Conf. Computer Vision and Pattern Recognition 1991, pages 586-591"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PCA on Images\n",
"\n",
"We've seen PCA in 2D, and the maths remains the same in higher dimensions.\n",
"We'll use a small subset (500 images) from the aligned [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) data set.\n",
"This is formed from another data set, [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/), which contains images of celebrities, politicians, and so forth, with ground-truth identity information.\n",
"The aligned CelebA data set has the faces lined up so that the eyes are in the same place in the image, and all the images are the same size (\\\\(178 \\times 218\\\\) pixels).\n",
"Since these are colour images, each pixel has 3 values, and so we can think of each image as a point in \\\\(178 \\times 218 \\times 3 = 116,412\\\\) dimensional space.\n",
"\n",
"PCA proceeds much as in the 2D case, but we'll need a slight modification to cope with this very high-dimensional data.\n",
"\n",
"We first need to read the images in, and convert them to 1D vectors.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import numpy\n",
"import matplotlib.pyplot as plt\n",
"import cv2\n",
"import math\n",
"\n",
"n = 500 # Number of samples\n",
"d = 178*218*3 # Number of dimensions (pixels) per sample\n",
"\n",
"F = numpy.zeros([n,d]) # faces, one per row\n",
"\n",
"for i in range(0,n):\n",
" imgFile = 'celebA/'+ format(i+1, '06d') + '.jpg'\n",
" image = cv2.imread(imgFile, cv2.IMREAD_COLOR)\n",
" F[i, :] = image.flatten()/255.0\n",
" cv2.imshow('Display', image)\n",
" cv2.waitKey(10)\n",
"\n",
"# Compute mean\n",
"m = numpy.mean(F, 0)\n",
"Z = F - m\n",
"\n",
"# Visualise the average image\n",
"meanImg = m.reshape(218, 178, 3)\n",
"cv2.imshow('Display', meanImg)\n",
"cv2.waitKey()\n",
"\n",
"cv2.destroyAllWindows()"
]
},
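{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, `cv2.imshow` opens a native window, which won't work in a headless or remote session. A minimal sketch of an inline alternative using the `matplotlib` already imported above (note that OpenCV loads images in BGR channel order, so we reverse the channels before plotting):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Show the mean face inline; m holds values in [0, 1] in BGR channel order\n",
"meanRGB = m.reshape(218, 178, 3)[:, :, ::-1]  # BGR -> RGB for matplotlib\n",
"plt.imshow(meanRGB)\n",
"plt.axis('off')\n",
"plt.title('Mean face')\n",
"plt.show()"
]
},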
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Covariance Computation\n",
"\n",
"The next step is to compute the covariance,\n",
"\\\\[\n",
"C = \\frac{1}{n-1}Z^TZ\n",
"\\\\]\n",
"\n",
"However, with the high dimensionality of the data we have a problem. The matrix \\\\(Z\\\\) is \\\\(500 \\times 116,412\\\\), so \\\\(C\\\\) will be \\\\(116,412 \\times 116,412\\\\), or 13,551,753,744 values. \n",
"At 1 byte per value, that's 13.5 GB of storage - and even 4 byte floats would need nearly 55 GB to *store*, let alone compute eigenvectors from.\n",
"\n",
"## The 'Transpose Trick'\n",
"\n",
"Fortunately there is a way around this, which arises from the mathematical definition of an eigenvector.\n",
"If we ignore the factor of \\\\(\\frac{1}{n-1}\\\\) we have\n",
"\\\\[\n",
"\\begin{align}\n",
"Cv &= \\lambda v \\\\\n",
"Z^TZv &= \\lambda v\\\\\n",
"ZZ^TZv &= \\lambda Zv\\\\\n",
"ZZ^T(Zv) &= \\lambda (Zv)\n",
"\\end{align}\n",
"\\\\]\n",
"So if \\\\(v\\\\) is an eigenvector of \\\\(C\\\\) with eigenvalue \\\\(\\lambda\\\\), then \\\\(Zv\\\\) is an eigenvector of \n",
"\\\\[\n",
"C' = ZZ^T\n",
"\\\\]\n",
"with the same eigenvalue.\n",
"This means that we can swap our dimensions and our samples, and still do the same analysis.\n",
"\n",
"**Question: Why is it OK to ignore the factor \\\\(\\frac{1}{n-1}\\\\)? What factor should we use for \\\\(C'\\\\)?**\n",
"\n",
"This *transpose trick* is useful whenever you have more dimensions to your data than samples, which is almost always the case when your samples are images.\n",
"\n",
"The other change to the PCA presented earlier is that we use `eigh` rather than `eig`. \n",
"The `eigh` function has some specific properties:\n",
"- It only works for Hermitian (or real symmetric) square matrices\n",
"- It returns the eigenvalues in order from smallest to largest\n",
"\n",
"The covariance matrix is square and symmetric, so we can apply `eigh`, and the ordering is useful since `eig` is not guaranteed to return the values in any particular order. \n",
"We do, however, have to reverse the order of the results.\n",
"\n",
"Once we have a matrix, \\\\(V\\\\), whose columns are the eigenvectors, the eigenfaces are found as the rows of the matrix\n",
"\\\\[\n",
"U = V^TZ,\n",
"\\\\]\n",
"and these are normalised so that they are unit vectors."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"C = numpy.matmul(Z, numpy.transpose(Z))/(n-1)\n",
"\n",
"eVal, eVec = numpy.linalg.eigh(C)\n",
"\n",
"# Reverse the order so they go from largest to smallest\n",
"e = eVal[::-1]\n",
"V = eVec[:,::-1]\n",
"\n",
"# Eigen faces - need normalising\n",
"U = numpy.matmul(numpy.transpose(V), Z)\n",
"for i in range(0,n):\n",
" U[i,:] /= numpy.linalg.norm(U[i,:])"
]
},
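{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check on the transpose trick (a sketch using a small synthetic matrix, so that the full \\\\(d \\times d\\\\) matrix actually fits in memory): the non-zero eigenvalues of \\\\(Z^TZ\\\\) and \\\\(ZZ^T\\\\) should agree, and the normalised eigenfaces should be unit vectors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Small synthetic check of the transpose trick\n",
"rs = numpy.random.RandomState(0)\n",
"Zs = rs.randn(5, 12)\n",
"Zs = Zs - numpy.mean(Zs, 0)\n",
"\n",
"big = numpy.linalg.eigvalsh(numpy.matmul(Zs.T, Zs))    # 12 x 12\n",
"small = numpy.linalg.eigvalsh(numpy.matmul(Zs, Zs.T))  # 5 x 5\n",
"\n",
"# The five largest eigenvalues of the big matrix match the small one\n",
"print(numpy.allclose(big[-5:], small))\n",
"\n",
"# The eigenfaces computed above are unit vectors\n",
"print(numpy.allclose(numpy.linalg.norm(U, axis=1), 1.0))"
]
},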
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The matrix \\\\(U\\\\) has 500 rows, each of which can be interpreted as an image (with appropriate scaling).\n",
"These are the principal components of the face data set, and are the Eigenfaces that give the method its name."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"for i in range(0,10):\n",
" ef = U[i,:].reshape(218, 178, 3)/(2*numpy.max(numpy.abs(U[i,:]))) + 0.5  # scale into [0, 1]\n",
" cv2.imshow('EigenFaces', ef)\n",
" cv2.waitKey()\n",
" \n",
"cv2.destroyAllWindows()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A given face image can be represented as the mean face plus a weighted sum of the eigenfaces (each row of the matrix being an eigenface). The weights are given by\n",
"\\\\[\n",
"w = U(f-\\mu)\n",
"\\\\]\n",
"where \\\\(U\\\\) is the matrix of eigenfaces, \\\\(f\\\\) is the face as a vector, and \\\\(\\mu\\\\) is the mean face vector.\n",
"These weights can be used as a low-dimensional representation of the faces, and we can choose to discard those corresponding to small eigenvalues to further reduce the space.\n",
"\n",
"For our training faces, \\\\(f - \\mu\\\\) corresponds to the rows of \\\\(Z\\\\). We can see this by reconstructing a face from the database:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"f = 17 # Index of a face in the database\n",
"\n",
"original = (Z[f,:] + m).reshape(218, 178, 3)\n",
"cv2.imshow('Original', original)\n",
"cv2.waitKey()\n",
"\n",
"w = numpy.matmul(U, Z[f,:])\n",
"\n",
"recon = m.copy()\n",
"cv2.imshow('Reconstruction', recon.reshape(218, 178, 3))\n",
"cv2.waitKey()\n",
" \n",
"for i in range(0,n):\n",
" recon += w[i]*U[i, :]\n",
" cv2.imshow('Reconstruction', recon.reshape(218, 178, 3))\n",
" cv2.waitKey(100)\n",
" \n",
"cv2.waitKey()\n",
" \n",
"\n",
"cv2.destroyAllWindows()"
]
},
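{
"cell_type": "markdown",
"metadata": {},
"source": [
"The reconstruction above uses all 500 weights. To actually reduce dimensionality we can keep only the first \\\\(k\\\\) eigenfaces (those with the largest eigenvalues). A sketch, with \\\\(k\\\\) chosen arbitrarily; since the eigenfaces are orthonormal, the reconstruction error can only decrease as \\\\(k\\\\) grows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"k = 50  # number of components to keep (arbitrary choice)\n",
"\n",
"wk = numpy.matmul(U[:k, :], Z[f, :])     # k weights instead of 116,412 pixels\n",
"reconK = m + numpy.matmul(wk, U[:k, :])  # truncated reconstruction\n",
"\n",
"print('Error with', k, 'components:',\n",
"      numpy.linalg.norm(reconK - (Z[f, :] + m)))\n",
"\n",
"cv2.imshow('Truncated reconstruction', reconK.reshape(218, 178, 3))\n",
"cv2.waitKey()\n",
"cv2.destroyAllWindows()"
]
},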
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(500, 116412)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"U.shape"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}