Hide code cell content
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import matplotlib as mp
import sklearn
from IPython.display import Image

from sklearn.neighbors import KNeighborsClassifier 
import sklearn.datasets as datasets
%matplotlib inline

\(k\)-Nearest Neighbors#

Today we’ll expand our repertoire of classification techniques.

In so doing, we’ll see our first example of a new kind of model: a nonparametric model.

Parametric vs. Nonparametric Models#

There are many ways to define models (whether supervised or unsupervised).

However a key distinction is this: does the model have a fixed number of parameters, or does the number of parameters grow with the training data?

If the model has a fixed number of parameters, it is called parametric.

If the number of parameters grows with the data, the model is called nonparametric.

Parametric models have

  • the advantage of often being faster to use,

  • but the disadvantage of making strong assumptions about the nature of data distributions.

Nonparametric models are

  • more flexible,

  • but can be computationally intractable for large datasets.

The classic example of a nonparametric classifier is called \(k\)-Nearest Neighbors.

Hide code cell content
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

\(k\)-Nearest Neighbors#

When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.

–James Whitcomb Riley (1849 - 1916)

_images/L15-duck.jpg

Like any classifier, \(k\)-Nearest Neighbors is trained by providing it a set of labeled data.

However, at training time, the classifier does very little. It just stores away the training data.

Hide code cell source
demo_y = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
demo_X = np.array([[-3,1], [-2, 4], [-2, 2], [-1.5, 1], [-1, 3], [0, 0], [1, 1.5], [2, 0.5], [2, 3], [2, 0], [3, 1], [4, 4], [0, 1]])
test_X = [-0.3, 0.7]
#
plt.scatter(demo_X[:,0], demo_X[:,1], c=demo_y, cmap=cmap_bold)
plt.axis('equal')
plt.axis('off')
plt.title('Training Points: 2 Classes');
_images/7e1a3c1561d426d05681d84e4a9ecfd77b90958ac6db5bc067a5f55cd90f6981.png

The idea of the \(k\)-Nearest Neighbors classifier is that, at test time, it simply “looks at” the \(k\) points in the training set that are nearest to the test input \(x\), and makes a decision based on the labels on those points.

By “nearest” we usually mean in Euclidean distance.
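
To make the idea concrete, here is a minimal from-scratch sketch of the prediction step (for illustration only; below we will use scikit-learn’s implementation). It computes Euclidean distances to all training points, takes the \(k\) smallest, and returns the majority label.

from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3):
    # Euclidean distance from the test point to every training point
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x_test), axis=1)
    # indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # majority vote over the labels of those points
    return Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0]

knn_predict(demo_X, demo_y, test_X, k=3)   # -> 1, the "blue" class in the figures below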

Hide code cell source
plt.scatter(demo_X[:,0], demo_X[:,1], c=demo_y, cmap=cmap_bold)
plt.plot(test_X[0], test_X[1], 'ok')
plt.annotate('Test Point', test_X, [75, 25], 
             textcoords = 'offset points', fontsize = 14, 
             arrowprops = {'arrowstyle': '->'})
plt.axis('equal')
plt.axis('off')
plt.title('Training Points: 2 Classes');
_images/af47e9602b7964b34a599fc8d8042d814bb962af653fd58c769c4e5bc9498a65.png
Hide code cell source
plt.scatter(demo_X[:,0], demo_X[:,1], c=demo_y, cmap=cmap_bold)
plt.plot(test_X[0], test_X[1], 'ok')
ax=plt.gcf().gca()
circle = mp.patches.Circle(test_X, 0.5, facecolor = 'red', alpha = 0.2)
plt.axis('equal')
plt.axis('off')
ax.add_artist(circle)
plt.title('1-Nearest-Neighbor: Classification: Red');
_images/deb333da0dfbc23af3eb83d2ea2dd3a3d25afbc40b07aedb6fc2a79346ecbfc1.png
Hide code cell source
plt.scatter(demo_X[:,0], demo_X[:,1], c=demo_y, cmap=cmap_bold)
test_X = [-0.3, 0.7]
plt.plot(test_X[0], test_X[1], 'ok')
ax=plt.gcf().gca()
circle = mp.patches.Circle(test_X, 0.9, facecolor = 'gray', alpha = 0.3)
plt.axis('equal')
plt.axis('off')
ax.add_artist(circle)
plt.title('2-Nearest-Neighbor');
_images/86f0da6589c6af5e50fc17b31f07f2e8537bcbb0b85ac34f269c77800e5e1bbe.png
Hide code cell source
plt.figure()
ax=plt.gcf().gca()
circle = mp.patches.Circle(test_X, 1.4, facecolor = 'blue', alpha = 0.2)
ax.add_artist(circle)
plt.scatter(demo_X[:,0], demo_X[:,1], c=demo_y, cmap=cmap_bold)
test_X = [-0.3, 0.7]
plt.plot(test_X[0], test_X[1], 'ok')
plt.axis('equal')
plt.axis('off')
plt.title('3-Nearest-Neighbor: Classification: Blue');
_images/6c06e4be33f20b26df67c3f8a2256d89948ef1c18015d046dc54d004a7ac196c.png

Note that \(k\)-Nearest Neighbors can do either hard or soft classification.

As a hard classifier, it returns the majority vote of the labels on the \(k\) Nearest Neighbors.

The vote may be tied, and hence indeterminate, as in the 2-nearest-neighbor example above.

It is also reasonable to weight the votes of neighborhood points according to their distance from \(x\).

As a soft classifier it returns:

\[ p(y = c\,|\,\mathbf{x}, k) = \frac{\text{number of points in neighborhood with label } c}{k} \]
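
Both options are available in scikit-learn: passing weights='distance' to KNeighborsClassifier gives distance-weighted voting, and predict_proba returns the soft (vote-fraction) output. A small sketch on the demo data above:

knn_demo = KNeighborsClassifier(n_neighbors = 3)
knn_demo.fit(demo_X, demo_y)
print(knn_demo.predict([test_X]))        # hard output: majority vote over the 3 neighbors
print(knn_demo.predict_proba([test_X]))  # soft output: fraction of neighbors per class (columns follow knn_demo.classes_)

# distance-weighted voting: closer neighbors count more, which can change the outcome
knn_weighted = KNeighborsClassifier(n_neighbors = 3, weights = 'distance')
knn_weighted.fit(demo_X, demo_y)
print(knn_weighted.predict([test_X]))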

Model Selection for \(k\)-NN#

Each value of \(k\) results in a different model.

The complexity of the resulting model is therefore controlled by the hyperparameter \(k\).

Hence we will want to select \(k\) using held-out data to avoid overfitting.
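
One standard way to do this is cross-validation. Here is a sketch using scikit-learn’s cross_val_score, where X and y stand for whatever labeled training data is available (later in this lecture we will instead average over many random train/test splits):

from sklearn.model_selection import cross_val_score

def select_k(X, y, candidate_ks = range(1, 26)):
    # score each candidate k by 5-fold cross-validation and return the best one
    scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors = k), X, y, cv = 5).mean()
              for k in candidate_ks}
    return max(scores, key = scores.get)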

Consider this dataset where items fall into three classes:

Hide code cell source
import sklearn.datasets as sk_data
X, y = sk_data.make_blobs(n_samples=150, 
                          centers=[[-2, 0],[1, 5], [2.5, 1.5]],
                          cluster_std = [2, 2, 3],
                          n_features=2,
                          center_box=(-10.0, 10.0),random_state=0)
plt.figure(figsize = (5,5))
plt.axis('equal')
plt.axis('off')
plt.scatter(X[:,0], X[:,1], c = y, cmap = cmap_bold, s = 80);
_images/cafb6a02ec859bdbc40e6a34d050d4899fd22902b4d3b63e6d0db3d713ff37bb.png

Let’s observe how the complexity of the resulting model changes as we vary \(k\).

We’ll do this by plotting the decision regions. These show how the method would classify each potential test point in the space.

Hide code cell source
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
h = .1  # step size in the mesh
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                      np.arange(y_min, y_max, h))
Hide code cell source
f, axs = plt.subplots(1, 3, figsize=(15, 5))
for i, k in enumerate([1, 5, 25]):
    knn = KNeighborsClassifier(n_neighbors = k)
    knn.fit(X, y)
    Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    axs[i].pcolormesh(xx, yy, Z, cmap = cmap_light, shading = 'auto')
    axs[i].axis('equal')
    axs[i].axis('off')
    axs[i].set_title(f'Decision Regions for $k$ = {k}');
_images/f9fb5536589de32c89b869c928153c54215e86eb9d0f11052594109822ef5c0b.png

Notice how increasing \(k\) results in smoother decision boundaries.

Smoother boundaries are less prone to overfitting the training data, and so are more likely to show good generalization ability.

Challenges for \(k\)-NN#

Working with a \(k\)-NN classifier can involve some challenges.

  1. First and foremost, the computational cost of classification grows with the size of the training data. (Why?) While certain data structures may help, essentially the classification time grows linearly with the data set size.

Note the tradeoff here: the training step is trivial, but the classification step can be prohibitively expensive.
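
In scikit-learn, the neighbor-search structure is controlled by the algorithm parameter ('brute', 'kd_tree', 'ball_tree', or 'auto'). A tree index can speed up the search in low dimensions, though it helps less as the dimension grows. Here is a rough sketch of how one might compare them on synthetic data; the timings are illustrative only and will vary by machine.

import time

rng = np.random.default_rng(0)
X_big = rng.uniform(size = (20000, 5))      # 20,000 training points in 5 dimensions
y_big = rng.integers(0, 2, size = 20000)
X_query = rng.uniform(size = (1000, 5))     # 1,000 test queries

for alg in ['brute', 'kd_tree']:
    knn = KNeighborsClassifier(n_neighbors = 5, algorithm = alg).fit(X_big, y_big)
    start = time.perf_counter()
    knn.predict(X_query)
    print(f'{alg}: {time.perf_counter() - start:.3f} seconds')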

  2. Second, since Euclidean distance is the most common distance function used, data scaling is important.

As previously discussed, features should be scaled to prevent distance measures from being dominated by a small subset of features.
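
A typical way to handle this (one option among several) is to put a scaler in front of the classifier in a Pipeline, so that the scaling learned on the training data is reused at test time:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# standardize each feature to zero mean and unit variance before computing distances
scaled_knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors = 5))
# scaled_knn.fit(X_train, y_train); scaled_knn.score(X_test, y_test)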

  3. The third challenge concerns the curse of dimensionality.

If training data lives in a high dimensional space, Euclidean distance measures become less effective.

This is subtle but important, so we will now look at the curse of dimensionality more closely.


The Curse of Dimensionality#

The Curse of Dimensionality is a somewhat tongue in cheek term for serious problems that arise when we use geometric algorithms in high dimensions.

There are various aspects of the Curse that affect \(k\)-NN.

1. Points are far apart in high dimension.

\(k\)-NN relies on there being one or more “close” points to the test point \(x\).

In other words, we need the training data to be relatively dense, so there are “close” points everywhere.

Unfortunately, the amount of space we work in grows exponentially with the dimension \(d\).

So the amount of data we need to maintain a given density also grows exponentially with dimension \(d\).

Hence, points in high-dimensional spaces tend not to be close to one another at all.

One very intuitive way to think about it is this:

In order for two points to be close in \(\mathbb{R}^d\), they must be close in each of the \(d\) dimensions.

As the number of dimensions grows, it becomes harder and harder for a pair of points to be close in each dimension.

2. Points tend to all be at similar distances in high dimension.

This one is a little harder to visualize. We’ll use formulas instead to guide us.

Let’s say points are uniformly distributed in space, so that the number of points in a region is proportional to the region’s volume.

How does volume relate to distance as dimension \(d\) grows?

Suppose you are at some point in space (say, the test point \(x\)), and you want to know how many points are within a unit distance of you.

This is proportional to the volume of a hypersphere with radius 1.

Now, the volume of a hypersphere of radius \(r\) in \(d\) dimensions is \(k_d \,r^d\).

For each \(d\) there is a different constant \(k_d\) (checked numerically in the short snippet below):

  • For \(d = 2\), \(k_d = \pi\) (the area of a circle is \(\pi r^2\)),

  • for \(d = 3\), \(k_d = \frac{4}{3}\pi\),

  • etc.
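
As a quick check on these constants, we can use the closed form \(k_d = \pi^{d/2} / \Gamma(d/2 + 1)\) for the volume of the unit ball; note how quickly \(k_d\) itself shrinks as \(d\) grows.

from scipy.special import gamma

def unit_ball_volume(d):
    # volume of the radius-1 ball in d dimensions: k_d = pi^(d/2) / Gamma(d/2 + 1)
    return np.pi ** (d / 2) / gamma(d / 2 + 1)

for d in [1, 2, 3, 10, 50, 100]:
    print(f'd = {d:3d}: k_d = {unit_ball_volume(d):.3g}')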

Hide code cell source
ax = plt.figure(figsize = (7,7)).add_subplot(projection = '3d')
# coordinates of sphere surface
u, v = np.mgrid[0:2*np.pi:50j, 0:np.pi:50j]
x = np.cos(u)*np.sin(v)
y = np.sin(u)*np.sin(v)
z = np.cos(v)
#
ax.plot_surface(x, y, z, color='r', alpha = 0.3)
s3 = 1/np.sqrt(3)
ax.quiver(0, 0, 0, s3, s3, s3, color = 'b')
ax.text(s3/2, s3/2, s3/2-0.2, 'r', size = 14)
ax.set_axis_off()
plt.title('Hypersphere in $d$ dimensions\nVolume is $k_d \,r^d$');
_images/cedb9f720b21a65f3356fe6bc4609fd181c0fa1fc700694687c48ddb6188ff69.png

Let’s also ask how many points are within a slightly smaller distance, say 0.99.

The new distance can be thought of as \(1 - \epsilon\) for some small \(\epsilon\).

The number of points, then, is of course proportional to \(k_d (1-\epsilon)^d\).

Now, what is the fraction \(f_d\) of all the points that are within a unit distance, but not within a distance of 0.99?

(That is, points within the unit hypersphere but not within the hypersphere of radius \(1-\epsilon\).)

This is

\[ f_d = \frac{k_d \cdot 1^d - k_d \,(1-\epsilon)^d}{k_d \cdot 1^d} = 1 - (1-\epsilon)^d \]
Hide code cell source
ax = plt.figure(figsize = (7,7)).add_subplot(projection = '3d')
# coordinates of sphere surface
u, v = np.mgrid[0:2*np.pi:50j, 0:np.pi:50j]
x = np.cos(u)*np.sin(v)
y = np.sin(u)*np.sin(v)
z = np.cos(v)
#
ax.plot_surface(x, y, z, color='r', alpha = 0.2)
s3 = 1/np.sqrt(3)
ax.quiver(0, 0, 0, s3, s3, s3, color = 'b')
ax.text(s3/2, s3/2, s3/2-0.2, '1', size = 14)
#
eps = 0.9
#
ax.plot_surface(eps * x, eps * y, eps * z, color='b', alpha = 0.2)
ax.quiver(0, 0, 0, eps, 0, 0, color = 'k')
ax.text(1/2-0.2, 0, -0.4, r'$1-\epsilon$', size = 14)
ax.set_axis_off()
plt.title('Inner and Outer Hyperspheres');
_images/9faf73e7f18be607c6a19f22fdac8083a17e0cb85b4371aab2efd2a51b0c71d4.png

Now, \((1-\epsilon)^d\) goes to 0 as \(d \rightarrow \infty\).

So, \(f_d\) goes to 1 as \(d \rightarrow \infty\).

Which means: in the limit of high \(d\), all of the points that are within 1 unit of our location are almost exactly 1 unit from our location!
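
To see how fast this happens, here is a small calculation of \(f_d = 1 - (1-\epsilon)^d\) with \(\epsilon = 0.01\):

eps = 0.01
for d in [1, 10, 100, 1000]:
    # fraction of the unit ball's volume lying in the thin shell between radius 1 - eps and 1
    print(f'd = {d:4d}: f_d = {1 - (1 - eps) ** d:.3f}')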

Hide code cell source
ax = plt.figure(figsize = (7,7)).add_subplot(projection = '3d')
# coordinates of sphere surface
u, v = np.mgrid[0:2*np.pi:50j, 0:np.pi:50j]
x = np.cos(u)*np.sin(v)
y = np.sin(u)*np.sin(v)
z = np.cos(v)
#
ax.plot_surface(x, y, z, color='r', alpha = 0.2)
s3 = 1/np.sqrt(3)
ax.quiver(0, 0, 0, s3, s3, s3, color = 'b')
ax.text(s3/2, s3/2, s3/2-0.2, '1', size = 14)
#
eps = 0.9
#
ax.plot_surface(eps * x, eps * y, eps * z, color='b', alpha = 0.2)
ax.quiver(0, 0, 0, eps, 0, 0, color = 'k')
ax.text(1/2-0.2, 0, -0.4, r'$1-\epsilon$', size = 14)
ax.set_axis_off()
plt.title('In high-$d$, All Points Lie in Outer Shell');
_images/5f70b5adda7c3744ec3db89b94296226b684d901eb4f1e7aea3d3a33a7d7aa23.png

Let’s demonstrate this effect in practice.

What we will do is create 1000 points, scattered uniformly at random within the unit cube in \(d\) dimensions.

We will look at two quantities:

  • The minimum distance between any two points, and

  • The average distance between any two points.

as we vary \(d\).

Hide code cell source
import sklearn.metrics as metrics

nsamples = 1000
unif_X = np.random.default_rng().uniform(0, 1, nsamples).reshape(-1, 1)
euclidean_dists = metrics.euclidean_distances(unif_X)
# extract the values above the diagonal
dists = euclidean_dists[np.triu_indices(nsamples, 1)]
mean_dists = [np.mean(dists)]
min_dists = [np.min(dists)]
for d in range(2, 101):
    unif_X = np.column_stack([unif_X, np.random.default_rng().uniform(0, 1, nsamples)])
    euclidean_dists = metrics.euclidean_distances(unif_X)
    dists = euclidean_dists[np.triu_indices(nsamples, 1)]
    mean_dists.append(np.mean(dists))
    min_dists.append(np.min(dists))
Hide code cell source
plt.plot(min_dists, label = "Minimum Distance")
plt.plot(mean_dists, label = "Average Distance")
plt.xlabel(r'Number of dimensions ($d$)')
plt.ylabel('Distance')
plt.legend(loc = 'best')
plt.title(f'Comparison of Minimum Versus Average Distance Between {nsamples} Points\nAs Dimension Grows');
_images/2bf966779ba45781acef2c7253ee6a33c56d2576a518f8b31913e826604627e1.png

The average distance between points grows, and the minimum distance between points grows nearly as fast.

As a result, the ratio of the minimum distance to the average distance grows as well!

Let’s look at that ratio:

Hide code cell source
plt.plot([a/b for a, b in zip(min_dists, mean_dists)])
plt.xlabel(r'Number of dimensions ($d$)')
plt.ylabel('Ratio')
plt.title(f'Ratio of Minimum to Average Distance Between {nsamples} Points\nAs Dimension Grows');
_images/2e1d6b28a69df720fbf886ca0bfe679b86a85865f36118fd21c72f12e9bb07fb.png

This shows that, for any test point \(x\), the distance to the closest point to \(x\), relatively speaking, gets closer and closer to the average distance between points.

Of course, if we used a point at the average distance for classifying \(x\), we’d get a very poor classifier.

Implications of the Curse.

For \(k\)-NN, the Curse of Dimensionality means that in high dimension, most points are nearly the same distance from the test point.

This makes \(k\)-NN ineffective: it cannot reliably tell which are the \(k\) nearest neighbors, and its performance degrades.

What Can be Done?

The problem is that you simply cannot have enough data to do a good job using \(k\)-NN in high dimensions.

If you must use \(k\)-NN for your task, the only option may be to reduce the dimension of your data.

Surprisingly, this can often be done at little cost in accuracy.

We will discuss dimensionality reduction techniques at length later in the course.

In Practice#

Next we’ll look at two classification methods in practice:

  • Decision Trees, and

  • k-Nearest Neighbors.

To compare these methods, the question arises:

How do we evaluate a classifier?#

In the simple case of a binary classifier, we can call one class the ‘Positive’ class and one the ‘Negative’ class.

The most basic measure of success for a classifier is accuracy: what fraction of test points are correctly classified?

Of course, accuracy is important, but it can be too simplistic at times.

For example, let’s say we have a dataset showing class imbalance: 90% of the data are in the Positive class and 10% are in the Negative class.

For this dataset, consider a classifier that always predicts ‘Positive’. Its accuracy is 90%, but it is a very ‘stupid’ classifier! (That is, it could be one line of code: print('Positive').)

A better way to measure the classifier’s performance is using a Confusion Matrix:

(Figure: a confusion matrix, with rows giving the actual class, columns giving the predicted class, and entries TP (true positive), FN (false negative), FP (false positive), and TN (true negative).)

Diagonal elements represent successes, and off-diagonal elements represent errors.

Using the confusion matrix we can define some more useful measures:

  • Recall - defined as the fraction of actual positives correctly classified:

    • TP/(TP + FN)

  • Precision - defined as the fraction of predicted positives that are actually positive (see the code sketch after this list):

    • TP/(TP + FP)
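
scikit-learn computes these directly. Here is a small sketch, assuming y_true holds the true labels and y_pred the classifier’s predictions, with the Positive class encoded as 1:

from sklearn.metrics import confusion_matrix, precision_score, recall_score

def report(y_true, y_pred):
    # confusion_matrix orders rows (actual) and columns (predicted) as [0, 1]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f'TP = {tp}, FP = {fp}, FN = {fn}, TN = {tn}')
    print(f'recall    = {recall_score(y_true, y_pred):.2f}')    # TP / (TP + FN)
    print(f'precision = {precision_score(y_true, y_pred):.2f}') # TP / (TP + FP)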

Evaluating \(k\)-Nearest Neighbors and Decision Trees#

First we’ll generate some synthetic data to work with.

X, y = datasets.make_circles(noise=.1, factor=.5, random_state=1)
print('Shape of data: {}'.format(X.shape))
print('Unique labels: {}'.format(np.unique(y)))
Shape of data: (100, 2)
Unique labels: [0 1]

Here is what the data looks like:

Hide code cell source
plt.figure(figsize = (6,6))
plt.prism()  # this sets a nice color map
plt.scatter(X[:, 0], X[:, 1], c=y, s = 80)
plt.axis('off')
plt.axis('equal');
_images/037db81e67dccaa672c2de750cd5694b6f463d99e4035f9301f19c70c3b0a702.png

Recall that we always want to test on data separate from our training data.

For now, we will do something very simple: take the first 50 examples for training and the rest for testing. (Later we will do this a better way.)

X_train = X[:50]
y_train = y[:50]
X_test = X[50:]
y_test = y[50:]
Hide code cell source
fig_size = (12, 5)
Hide code cell source
plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, s = 80)
plt.axis('equal')
plt.axis('off')
plt.title('Training Data')

plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_test, s = 80)
plt.title('Test Data')
plt.axis('equal')
plt.axis('off');
_images/f54571f5ed3be4ce4fc3db66fdc06088dd23581dc8122b33f50d6279e984985b.png

For our first example, we will classify the points (in the two classes) using a \(k\)-NN classifier.

We will specify that \(k=5\), i.e., we will classify based on the majority vote of the 5 nearest neighbors.

k = 5
knn5 = KNeighborsClassifier(n_neighbors = k)    

In the context of supervised learning, the scikit-learn fit() function corresponds to training and the predict() function corresponds to testing.

knn5.fit(X_train,y_train)
print(f'Accuracy on test data: {knn5.score(X_test, y_test)}')
Accuracy on test data: 0.72

Accuracy of 72% sounds good – but let’s dig deeper.

We’ll call the red points the Positive class and the green points the Negative class.

Here is the confusion matrix:

Hide code cell source
y_pred_test = knn5.predict(X_test)
pd.DataFrame(metrics.confusion_matrix(y_test, y_pred_test), 
             columns = ['Predicted +', 'Predicted -'], 
             index = ['Actual +', 'Actual -'])
          Predicted +  Predicted -
Actual +           14           14
Actual -            0           22

Looks like the classifier is getting all of the Negative class correct, but only achieving accuracy of 50% on the Positive class.

That is, its precision is 100%, but its recall is only 50%.

Let’s visualize the results.

Hide code cell source
k = 5

plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)

plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, s = 80)
plt.axis('equal')
plt.title('Training')
plt.axis('off')

plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_pred_test, s = 80)
plt.title(f'Testing $k$={k}\nAccuracy: {knn5.score(X_test, y_test)}')
plt.axis('off')
plt.axis('equal');
_images/48a46cc694669d9738b5db1ca59a7714f514b53bcd3a73a2a983fd54a9bc6319.png

Indeed, the Positive (red) points in the upper half of the test data are all classified incorrectly.

Let’s look at one of the points that the classifier got wrong:

Hide code cell source
k=5 
test_point = np.argmax(X_test[:,1])
neighbors = knn5.kneighbors([X_test[test_point]])[1]

plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, s = 80)
plt.scatter(X_train[neighbors,0], X_train[neighbors,1],
            c = y_train[neighbors], marker='o', 
            facecolors='none', edgecolors='b', s = 80)
radius = np.max(metrics.euclidean_distances(X_test[test_point].reshape(1, -1), X_train[neighbors][0]))
ax = plt.gcf().gca()
circle = mp.patches.Circle(X_test[test_point], radius, facecolor = 'blue', alpha = 0.2)
ax.add_artist(circle)
plt.axis('equal')
plt.axis('off')
plt.title(r'Training')

plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_pred_test, s = 80)
plt.scatter(X_test[test_point,0], X_test[test_point,1], marker='o', 
            facecolors='none', edgecolors='b', s = 80)
plt.title('Testing $k$={}\nAccuracy: {}'.format(k,knn5.score(X_test, y_test)))
plt.axis('equal')
plt.axis('off');
_images/36dc80526e94072f5a7b83e9cfd95f92f76adb9be1990998bb1699210aa647e8.png

For comparison purposes, let’s try \(k\) = 3.

Hide code cell source
k = 3
knn3 = KNeighborsClassifier(n_neighbors=k)    
knn3.fit(X_train,y_train)
y_pred_test = knn3.predict(X_test)

plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s = 80)
plt.axis('equal')
plt.axis('off')
plt.title(r'Training')

plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_pred_test, s = 80)
plt.title(f'Testing $k$={k}\nAccuracy: {knn3.score(X_test, y_test)}')
plt.axis('off')
plt.axis('equal');
_images/15f2fb64dd7811ad14fc4ce7b2a1ee2eb3b4e79b085a1d4d54383b81176e7c80.png

And let’s look at the same individual point as before:

Hide code cell source
k = 3
test_point = np.argmax(X_test[:,1])
X_test[test_point]
neighbors = knn3.kneighbors([X_test[test_point]])[1]

plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s = 80)
plt.scatter(X_train[neighbors, 0], X_train[neighbors, 1], marker = 'o', 
            facecolors = 'none', edgecolors = 'b', s = 80)
radius = np.max(metrics.euclidean_distances(X_test[test_point].reshape(1, -1), 
                                            X_train[neighbors][0]))
ax = plt.gcf().gca()
circle = mp.patches.Circle(X_test[test_point], radius, facecolor = 'blue', alpha = 0.2)
ax.add_artist(circle)
plt.axis('equal')
plt.axis('off')
plt.title(r'Training')
plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_pred_test, s = 80)
plt.scatter(X_test[test_point,0], X_test[test_point,1], marker = 'o', 
            facecolors = 'none', edgecolors = 'b', s = 80)
plt.title(f'Testing $k$={k}\nAccuracy: {knn3.score(X_test, y_test)}')
plt.axis('off')
plt.axis('equal');
_images/8be09c5a5253847bc2a96d72e5e366359f751a8fabb3d9093652524b3556113a.png

So how confident can we be that the test accuracy is 92% in general?

What we really need to do is consider many different train/test splits.

Thus, the proper way to evaluate generalization ability (accuracy on the test data) is:

  1. Form a random train/test split

  2. Train the classifier on the training split

  3. Test the classifier on the testing split

  4. Accumulate statistics

  5. Repeat from 1. until enough statistics have been collected.

import sklearn.model_selection as model_selection

nreps = 50
kvals = range(1, 10)
acc = []
np.random.seed(4)
for k in kvals:
    test_rep = []
    train_rep = []
    for i in range(nreps):
        X_train, X_test, y_train, y_test = model_selection.train_test_split(X, 
                                                                            y, 
                                                                            test_size = 0.5)
        knn = KNeighborsClassifier(n_neighbors = k)    
        knn.fit(X_train, y_train)
        train_rep.append(knn.score(X_train, y_train))
        test_rep.append(knn.score(X_test, y_test))
    acc.append([np.mean(np.array(test_rep)), np.mean(np.array(train_rep))])
accy = np.array(acc)
Hide code cell source
plt.plot(kvals, accy[:, 0], '.-', label = 'Accuracy on Test Data')
plt.plot(kvals, accy[:, 1], '.-', label = 'Accuracy on Training Data')
plt.xlabel(r'$k$')
plt.ylabel('Accuracy')
plt.title('Train/Test Comparison of $k$-NN')
plt.legend(loc = 'best');
_images/2a3f9b88b52caeb51fa241d41759ef3d24fc20aee321a381a628fb490e84184e.png

Based on the generalization error (i.e., accuracy on held-out test data), it looks like \(k = 2\) is the best choice.

Here is the decision boundary for \(k\)-NN with \(k = 2\).

Hide code cell content
x_min, x_max = X[:, 0].min() - .1, X[:, 0].max() + .1
y_min, y_max = X[:, 1].min() - .1, X[:, 1].max() + .1
plot_step = 0.02
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                     np.arange(y_min, y_max, plot_step))
Hide code cell source
np.random.seed(1)
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.5)

k = 2
knn = KNeighborsClassifier(n_neighbors = k)  
knn.fit(X_train, y_train)
y_pred_train = knn.predict(X_train)
y_pred_test = knn.predict(X_test)

Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.figure(figsize = fig_size)
plt.subplot(1, 2, 1)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.3)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=30)
plt.axis('equal')
plt.axis('off')
plt.xlim((x_min, x_max))
plt.ylim((y_min, y_max))
plt.title(f'{k}-NN - Training Data\nAccuracy: {knn.score(X_train, y_train)}');

plt.subplot(1, 2, 2)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.3)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=30)
plt.axis('equal')
plt.axis('off')
plt.xlim((x_min, x_max))
plt.ylim((y_min, y_max))
plt.title(f'{k}-NN - Test Data\nAccuracy: {knn.score(X_test, y_test)}');
_images/cd4ca0ad0fb103394e236fcf8cefed0292557b85596658ffc2acbb2a6d6621db.png

Decision Tree#

Next, we’ll use a decision tree on the same data set.

import sklearn.tree as tree
dtc = tree.DecisionTreeClassifier(max_leaf_nodes = 5)

dtc.fit(X_train,y_train)
y_pred_test = dtc.predict(X_test)
print('DT accuracy on test data: ', dtc.score(X_test, y_test))
y_pred_train = dtc.predict(X_train)
print('DT accuracy on training data: ', dtc.score(X_train, y_train))
DT accuracy on test data:  0.94
DT accuracy on training data:  0.98
Hide code cell source
plt.figure(figsize = (6, 6))
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_pred_test, marker='^', s=80)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=80)
plt.axis('equal')
plt.axis('off')
plt.title(F'Decision Tree\n Triangles: Test Data, Circles: Training Data\nAccuracy: {dtc.score(X_test, y_test)}');
_images/8617966da275f4ca783feeadd09005da574d4421009e25789007b1cf7a7b42dd.png

Let’s visualize the decision boundary of the Decision Tree.

Hide code cell source
Z = dtc.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.figure(figsize = (12, 6))
plt.subplot(1, 2, 1)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.3)
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, s = 80)
plt.axis('equal')
plt.axis('off')
plt.xlim((x_min, x_max))
plt.ylim((y_min, y_max))
plt.title(f'Decision Tree - Training Data\nAccuracy: {dtc.score(X_train, y_train)}');

plt.subplot(1, 2, 2)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.3)
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_pred_test, marker = '^', s = 80)
plt.axis('equal')
plt.axis('off')
plt.xlim((x_min, x_max))
plt.ylim((y_min, y_max))
plt.title(f'Decision Tree - Test Data\nAccuracy: {dtc.score(X_test, y_test)}');
_images/d423e7b4a0e979026fb95afad3c69d76a0e47458fa5aee5112940e834f185faf.png

Comparing \(k\)-NN and Decision Tree#

It appears that \(k\)-NN and a Decision Tree have approximately comparable performance on this dataset.

However - there is a difference in interpretability.

Interpretability is a big deal! It means the ability to explain why the classifier made the decision it did.

It can be relatively difficult to understand why \(k\)-NN is making a specific prediction. It depends on the data in the neighborhood of the test point.

On the other hand, the Decision Tree can be easily understood.

We sometimes use the terms “black box” for an uninterpretable classifier like \(k\)-NN, and “white box” for an interpretable classifier like DT.

Let’s see an example of the interpretability of the Decision Tree:

Hide code cell source
dot_data = tree.export_graphviz(dtc, out_file=None,
                         feature_names=['X','Y'],
                         class_names=['Red','Green'],
                         filled=True, rounded=True,  
                         special_characters=True) 
import pydotplus
graph = pydotplus.graph_from_dot_data(dot_data) 
# graph.write_pdf("dt.pdf") 
Image(graph.create_png())
_images/e30ca63cfc2fabcdef35479f05d85520861b25b361c58f69fb903991f917211c.png
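
If graphviz/pydotplus is not available, a plain-text view of the same fitted tree (a lighter-weight alternative) can be printed with sklearn.tree.export_text:

# text rendering of the fitted decision tree (same structure as the graph above)
print(tree.export_text(dtc, feature_names = ['X', 'Y']))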

Real Data#

To explore a few more issues, we’ll now turn to some famous datasets that have been extensively studied in the past.

The Iris Dataset#

The Iris dataset is a famous dataset used by Ronald Fisher in a classic 1936 paper on classification.

Figure: R. A. Fisher

By http://www.swlearning.com/quant/kohler/stat/biographical_sketches/Fisher_3.jpeg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4233489

Quoting from Wikipedia:

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.

I. setosa

I. versicolor

I. virginica

iris = datasets.load_iris()
X = iris.data
y = iris.target
ynames = iris.target_names
print(X.shape, y.shape)
print(X[1,:])
print(iris.target_names)
print(y)
(150, 4) (150,)
[4.9 3.  1.4 0.2]
['setosa' 'versicolor' 'virginica']
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]

First, we’ll explore setting hyperparameters.

We start with \(k\)-NN.

To set the hyperparameter \(k\), we evaluate accuracy on the test set over many train/test splits:

kvals = range(2, 20)
nreps = 50

acc = []
std = []
np.random.seed(0)
for k in kvals:
    test_rep = []
    train_rep = []
    for i in range(nreps):
        X_train, X_test, y_train, y_test = model_selection.train_test_split(
            X, y, test_size = 0.33)
        knn = KNeighborsClassifier(n_neighbors = k)    
        knn.fit(X_train, y_train)
        train_rep.append(knn.score(X_train, y_train))
        test_rep.append(knn.score(X_test, y_test))
    acc.append([np.mean(np.array(test_rep)), np.mean(np.array(train_rep))])
    std.append([np.std(np.array(test_rep)), np.std(np.array(train_rep))])
Hide code cell source
accy = np.array(acc)
stds = np.array(std)/np.sqrt(nreps)
plt.plot(kvals, accy[:, 0], '.-', label = 'Accuracy on Test Data')
plt.plot(kvals, accy[:, 1], '.-', label = 'Accuracy on Training Data')
plt.xlabel('k')
plt.ylabel('Accuracy')
plt.xticks(kvals)
plt.legend(loc = 'best');
print(f'Max Test Accuracy at k = {kvals[np.argmax(accy[:, 0])]} with accuracy {np.max(accy[:, 0]):.03f}')
Max Test Accuracy at k = 13 with accuracy 0.969
_images/0882afa462eeb3befa3c4296ec1cf066208752699e7c11139dd4ee6e16f75814.png

Now, it looks like \(k\) = 13 is the best-performing value of the hyperparameter.

Can we be sure?

Be careful! Each point in the above plot is the mean of 50 random train/test splits!

If we are going to be sure that \(k\) = 13 is best, then it should be statistically distinguishable from the other values.

To make this call, let’s plot \(\pm 1 \sigma\) confidence intervals on the mean values.

(See the Probability Refresher for details on the proper formula.)

Hide code cell source
plt.errorbar(kvals, accy[:, 0], stds[:, 0], label = 'Accuracy on Test Data')
plt.xlabel('k')
plt.ylabel('Accuracy')
plt.legend(loc = 'lower center')
plt.xticks(kvals)
plt.title(r'Test Accuracy with $\pm 1\sigma$ Errorbars');
_images/35201d51a722bc401742959d2929e7ee6bc918fa6de29d44e0cf42706e627090.png

It looks like \(k\) = 13 is a reasonable value, although a case can be made that 9 and 11 are not statistically distinguishable from 13.

To gain insight into the complexity of the model for \(k\) = 13, let’s look at the decision boundary.

Note that we will re-run the classifier using only two (of four) features, so we can visualize.

Hide code cell source
# Create color maps
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

# we will use only the first two (of four) features, so we can visualize
X = X_train[:, :2] 
h = .02  # step size in the mesh
k = 13
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X, y_train)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                      np.arange(y_min, y_max, h))
Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, shading='auto')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y_train, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title(f"3-Class $k$-NN classification ($k$ = {k})");
_images/09a3ea6a5d4e3dbc9e3095b8a830dd7289f5d9f91cb80f12e60f582dc1561960.png

There are a few artifacts, but overall this looks like a reasonably smooth set of decision boundaries.

Now we’ll compare to a decision tree.

How do we control the complexity of a Decision Tree?

There are a variety of ways (see the sklearn documentation) but the simplest one is to control the number of leaf nodes in the tree.

A small number of leaf nodes is a low-complexity model, and a large number of nodes is a high-complexity model.

Hide code cell source
X = iris.data
y = iris.target
Hide code cell source
leaf_vals = range(3, 20)
nreps = 50

acc = []
std = []
np.random.seed(0)
for leaf_count in leaf_vals:
    test_rep = []
    train_rep = []
    for i in range(nreps):
        X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.10)
        dtc = tree.DecisionTreeClassifier(max_leaf_nodes = leaf_count)   
        dtc.fit(X_train, y_train)
        train_rep.append(dtc.score(X_train, y_train))
        test_rep.append(dtc.score(X_test, y_test))
    acc.append([np.mean(np.array(test_rep)), np.mean(np.array(train_rep))])
    std.append([np.std(np.array(test_rep)), np.std(np.array(train_rep))])
accy = np.array(acc)
stds = np.array(std)/np.sqrt(nreps)
Hide code cell source
plt.plot(leaf_vals, accy[:, 0], '.-', label = 'Accuracy on Test Data')
plt.plot(leaf_vals, accy[:, 1], '.-', label = 'Accuracy on Training Data')
plt.xlabel('Max Leaf Nodes')
plt.ylabel('Accuracy')
plt.legend(loc = 'best')
plt.xticks(leaf_vals)
best_leaf = leaf_vals[np.argmax(accy[:, 0])]
plt.title(f'Test/Train Error for Decision Tree\nMax Test Accuracy at {best_leaf} leaf nodes with accuracy {np.max(accy[:, 0]):.03f}');
_images/de9bc71c77ee1afd0c83b9ce89c6fa50ce60b8c7d39976a448f17795641f8d95.png
Hide code cell source
plt.errorbar(leaf_vals, accy[:, 0], stds[:, 0], label = 'Accuracy on Test Data')
plt.xlabel('Max Leaf Nodes')
plt.ylabel('Accuracy')
plt.legend(loc = 'lower center')
plt.xticks(leaf_vals)
plt.title(r'Test Accuracy with $\pm 1\sigma$ Errorbars');
_images/ae49bdd31fc2a6143a29e3542812c01906158f4dd3bb6b7ea9b5a4c8ecf11ac4.png

It looks like 9 leaf nodes is appropriate, but we would be justified in choosing 4 or 13 as well.

And now let’s visualize the decision boundary for the DT:

Hide code cell source
# we will use only the first two (of four) features, so we can visualize
X = X_train[:, :2] 
h = .02  # step size in the mesh
dtc = tree.DecisionTreeClassifier(max_leaf_nodes = best_leaf) 
dtc.fit(X, y_train)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                      np.arange(y_min, y_max, h))
Z = dtc.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, shading='auto')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y_train, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title(f"3-Class DT Classification\n{best_leaf} leaf nodes");
_images/20e8b0d5ecac2338671dc6f3f22e76f1717fc478aa170635c2e2a29de0817a24.png

MNIST dataset#

NIST used to be called the “National Bureau of Standards.” These are the folks who bring you the reference meter, reference kilogram, etc.

NIST constructed datasets for machine learning of handwritten digits. These were collected from Census Bureau employees and also from high-school students.

These data have been used repeatedly for many years to evaluate classifiers. For a peek at some of the work done with this dataset you can visit http://yann.lecun.com/exdb/mnist/.

import sklearn.utils as utils

digits = datasets.load_digits()
X, y = utils.shuffle(digits.data, digits.target, random_state = 1)

print ('Data shape: {}'.format(X.shape))
print ('Data labels: {}'.format(y))
print ('Unique labels: {}'.format(digits.target_names))
Data shape: (1797, 64)
Data labels: [1 5 0 ... 9 1 5]
Unique labels: [0 1 2 3 4 5 6 7 8 9]

An individual item is an \(8 \times 8\) image, encoded as a matrix:

digits.images[3]
array([[ 0.,  0.,  7., 15., 13.,  1.,  0.,  0.],
       [ 0.,  8., 13.,  6., 15.,  4.,  0.,  0.],
       [ 0.,  2.,  1., 13., 13.,  0.,  0.,  0.],
       [ 0.,  0.,  2., 15., 11.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  1., 12., 12.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  1., 10.,  8.,  0.],
       [ 0.,  0.,  8.,  4.,  5., 14.,  9.,  0.],
       [ 0.,  0.,  7., 13., 13.,  9.,  0.,  0.]])

Let’s show the matrix as an image:

Hide code cell source
# display the 8x8 matrix as a reversed-grayscale image (dark = large value)
plt.matshow(digits.images[3], cmap = plt.cm.gray_r);
_images/bcb947dd09c3d2691eb1d28dacb0edd310120529734164a296f64fb20eb85b87.png

It is easier to visualize if we blur the pixels a little bit.

Hide code cell source
plt.rc('image', cmap = 'binary', interpolation = 'bilinear')
plt.figure(figsize = (4, 4))
plt.axis('off')
plt.imshow(digits.images[3]);
_images/549dfae8cfd04142bc024d8822c29d5bd7b7a55150c441fa79449d1e642739bc.png

Here are some more samples from the dataset:

Hide code cell source
for t in range(4):
    plt.figure(figsize = (8, 2))
    for j in range(4):
        plt.subplot(1, 4, 1 + j)
        plt.imshow(X[4*t + j].reshape(8, 8))
        plt.axis('off');
_images/219c953497ffcaea0fb3484973f943015ee4895ad453eab47a86c04325565e07.png _images/d3fa2db197ea2b852a6fbb992dc6bfcf61b84b4123233f3bcf54fa860bf9a18b.png _images/aef4e9d7b89f6aab0f01be9f8562268f104df6be07206db665691704bcf34736.png _images/be006775ac6999d67c2c1c730263e831ad24b2f1e934d282c38aff635bf8dd97.png

Although this is an 8 \(\times\) 8 image, we can just treat it as a vector of length 64.
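
Concretely, digits.data is just digits.images with each \(8 \times 8\) matrix flattened into a row of 64 pixel values:

flat = digits.images.reshape(len(digits.images), -1)   # flatten each 8x8 image to a length-64 vector
print(flat.shape)                                      # (1797, 64)
print(np.allclose(flat, digits.data))                  # True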

To do model selection, we will again average over many train/test splits.

However, recall one issue of \(k\)-NN: it can be slow on a large training set (why?).

We may thus decide to limit the size of our testing set to speed up testing.

How does the train/test split affect results?

Let’s consider two cases:

  1. Train: 90% of data, Test: 10% of data

  2. Train: 67% of data, Test: 33% of data

def test_knn(kvals, test_fraction, nreps):
    acc = []
    std = []
    np.random.seed(0)
    #
    for k in kvals:
        test_rep = []
        train_rep = []
        for i in range(nreps):
            X_train, X_test, y_train, y_test = model_selection.train_test_split(
                X, y, test_size = test_fraction)
            knn = KNeighborsClassifier(n_neighbors = k)    
            knn.fit(X_train, y_train)
            test_rep.append(knn.score(X_test, y_test))
        acc.append(np.mean(np.array(test_rep)))
        std.append(np.std(np.array(test_rep)))
    return(np.array(acc), np.array(std)/np.sqrt(nreps))

test_fraction1 = 0.33
accy1, stds1 = test_knn(range(2, 20), test_fraction1, 50)
test_fraction2 = 0.10
accy2, stds2 = test_knn(range(2, 20), test_fraction2, 50)
Hide code cell source
plt.figure(figsize = (7, 5))
plt.errorbar(kvals, accy1, stds1, 
             label = f'{test_fraction1:.0%} Used for Testing; accuracy {np.max(accy1):.03f}')
plt.errorbar(kvals, accy2, stds2, 
             label = f'{test_fraction2:.0%} Used for Testing; accuracy {np.max(accy2):.03f}')
plt.xlabel('k')
plt.ylabel('Accuracy')
plt.legend(loc = 'best')
plt.xticks(kvals)
best_k = kvals[np.argmax(accy1)]
plt.title(r'Test Accuracy with $\pm 1\sigma$ Error Bars');
_images/883e302b0d1be2aba9ef56f139e0e40311163cc8625e9d533850c0b75dea1c7f.png

These plots illustrate important principles:

  • The more data used for training, the better the classifier will tend to perform

  • The less data used for testing, the more variable the test results will be (but the faster testing will go)

Note that the key decision here is what value to choose for \(k\). So it makes sense to use the 33% test split, because the smaller error bars give us better confidence in our decision.

We can get a sense of why \(k\)-NN can succeed at this task by looking at the nearest neighbors of some points:

Hide code cell source
knn = KNeighborsClassifier(n_neighbors = 3)    
knn.fit(X, y)
neighbors = knn.kneighbors(X[:3,:], n_neighbors=3, return_distance=False)
Hide code cell source
plt.rc("image", cmap="binary")  # this sets a black on white colormap
# plot each query digit together with its three nearest neighbors
for t in range(3):
    plt.figure(figsize=(8,2))
    plt.subplot(1, 4, 1)
    plt.imshow(X[t].reshape(8, 8))
    plt.axis('off')
    plt.title("Query")
    # plot three nearest neighbors from the training set
    for i in [0, 1, 2]:
        plt.subplot(1, 4, 2 + i)
        plt.title("neighbor {}".format(i))
        plt.imshow(X[neighbors[t, i]].reshape(8, 8))
        plt.axis('off')
_images/859c76a639c4bd1893b9a3cebe5af73990199b293a7ffa183e353d5e2d08706e.png _images/ecc4e18b72c1b9ce2e3f4e020bcba3fb9fbc6fd3f3f8fee264f1d852eca91f01.png _images/eb78ba886e4b68d43673e688481f44ce9358552ed782bd9bdb2ea699449497d8.png