Machine Learning with Swift

Calculating the distance

How do we calculate a distance? Well, that depends on the kind of problem. In two-dimensional space, we usually calculate the distance between two points, $(x_1, y_1)$ and $(x_2, y_2)$, as $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$—the Euclidean distance. But this is not how taxi drivers calculate distance, because in a city you can't cut corners and drive straight to your destination. So they use (knowingly or not) another distance metric: the Manhattan distance, or taxicab distance, also known as the l1-norm: $|x_1 - x_2| + |y_1 - y_2|$. This is the distance when we're only allowed to move along the coordinate axes:

Figure 3.1: The blue line represents the Euclidean distance, and the red line represents the Manhattan distance. Map of Manhattan by OpenStreetMap
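To make the two metrics concrete, here is a minimal Swift sketch (an illustration of the formulas above, not code from the book's sample project) that computes both distances for points given as arrays of coordinates:

```swift
/// Euclidean (l2) distance: the square root of the sum of squared coordinate differences.
func euclideanDistance(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "Points must have the same number of coordinates")
    return zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}

/// Manhattan (l1, taxicab) distance: the sum of absolute coordinate differences.
func manhattanDistance(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "Points must have the same number of coordinates")
    return zip(a, b).map { abs($0 - $1) }.reduce(0, +)
}

let p: [Double] = [1, 1]
let q: [Double] = [4, 5]
print(euclideanDistance(p, q))  // 5.0: the straight-line distance
print(manhattanDistance(p, q))  // 7.0: the "taxicab" distance along the axes
```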

Jewish German mathematician Hermann Minkowski proposed a generalization of both the Euclidean and Manhattan distances. Here is the formula for the Minkowski distance:

$$d(p, q) = \left( \sum_{i=1}^{n} |p_i - q_i|^c \right)^{\frac{1}{c}}$$

where p and q are n-dimensional vectors (or coordinates of points in n-dimensional space, if you wish). But what does c stand for? It is the order of the Minkowski distance: with c = 1, the formula gives the Manhattan distance, and with c = 2, it gives the Euclidean distance.
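As a quick sanity check of that claim, the following Swift sketch (my own illustration, with a hypothetical minkowskiDistance name) reproduces the Manhattan result with order 1 and the Euclidean result with order 2:

```swift
import Foundation  // for pow

/// Minkowski distance of order c between two n-dimensional vectors.
func minkowskiDistance(_ p: [Double], _ q: [Double], order c: Double) -> Double {
    precondition(p.count == q.count, "Vectors must have the same length")
    precondition(c >= 1, "The order must be at least 1")
    let sum = zip(p, q).map { pow(abs($0 - $1), c) }.reduce(0, +)
    return pow(sum, 1 / c)
}

let a: [Double] = [1, 1]
let b: [Double] = [4, 5]
print(minkowskiDistance(a, b, order: 1))  // 7.0, the Manhattan distance
print(minkowskiDistance(a, b, order: 2))  // 5.0, the Euclidean distance
```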

Vector operations, including the calculation of Manhattan and Euclidean distances, can be parallelized for efficiency. Apple's Accelerate framework provides APIs for fast vector and matrix computations.
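As a rough sketch of what that can look like (assuming a platform with the newer vDSP Swift overlay, macOS 10.15/iOS 13 or later; the exact function names, such as vDSP.distanceSquared and vDSP.absolute, should be verified against Apple's documentation), the two distances could be computed like this:

```swift
import Accelerate

let a: [Double] = [1, 1, 0, 2]
let b: [Double] = [4, 5, 1, 2]

// Euclidean distance: vDSP.distanceSquared returns the sum of squared
// element-wise differences, so we only add a square root on top.
let euclidean = vDSP.distanceSquared(a, b).squareRoot()

// Manhattan distance: element-wise difference, absolute value, then a sum.
let manhattan = vDSP.sum(vDSP.absolute(vDSP.subtract(a, b)))

print(euclidean, manhattan)
```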

In machine learning, we generalize the notion of distance to any kind of objects for which we can calculate how similar they are, using a function called a distance metric. In this way, we can define the distance between two pieces of text, two pictures, or two audio signals. Let's take a look at two examples.
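One way to express this generalization in Swift is a small protocol that any metric (Euclidean, Manhattan, or an edit distance over strings) can conform to; this is a hypothetical sketch, not an API used elsewhere in the book:

```swift
/// A hypothetical abstraction over "a function that measures the distance between two objects".
protocol DistanceMetric {
    associatedtype Object
    func distance(between a: Object, and b: Object) -> Double
}

/// The Euclidean metric from earlier, wrapped in the protocol.
struct EuclideanMetric: DistanceMetric {
    func distance(between a: [Double], and b: [Double]) -> Double {
        precondition(a.count == b.count, "Points must have the same number of coordinates")
        return zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
    }
}
```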

When you deal with two pieces of text of equal length, you can use an edit distance; for example, the Hamming distance: the minimum number of substitutions needed to transform one string into the other. To calculate more general edit distances, where insertions and deletions are also allowed, we use dynamic programming: an iterative approach in which the problem is broken into small subproblems, and the result of each step is remembered for future computations. Edit distance is an important measure in applications that deal with text revisions; for example, in bioinformatics (see the following diagram):

Figure 3.2: Four pieces of DNA from different species aligned together: modern human, neanderthal, gorilla, and cat. The Hamming edit distance from modern human to others is 1, 5, and 11 respectively.
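For equal-length strings such as the aligned DNA fragments above, the Hamming distance is simply a count of mismatching positions, as in this small sketch (not taken from the book's code):

```swift
/// Hamming distance: the number of positions at which two equal-length strings differ.
func hammingDistance(_ a: String, _ b: String) -> Int {
    precondition(a.count == b.count, "Hamming distance is defined only for strings of equal length")
    return zip(a, b).reduce(0) { $0 + ($1.0 == $1.1 ? 0 : 1) }
}

print(hammingDistance("GATTACA", "GACTATA"))  // 2
```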

Often, we store different signals (audio, motion data, and so on) as arrays of numbers. How do we measure the similarity of two such arrays? We use a combination of the Euclidean distance and the edit distance, called dynamic time warping (DTW).
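A compact dynamic-programming sketch of DTW for one-dimensional signals might look like the following; it uses the absolute difference as the local cost and is a simplified illustration rather than a production implementation:

```swift
/// Dynamic time warping (DTW) distance between two 1-D signals.
/// Builds a dynamic-programming table, much like the edit distance described above,
/// with |a - b| as the local cost of aligning two samples.
func dtwDistance(_ a: [Double], _ b: [Double]) -> Double {
    let n = a.count, m = b.count
    precondition(n > 0 && m > 0, "Both signals must be non-empty")

    // dp[i][j] = cost of the best alignment of the first i samples of a with the first j samples of b.
    var dp = [[Double]](repeating: [Double](repeating: .infinity, count: m + 1), count: n + 1)
    dp[0][0] = 0

    for i in 1...n {
        for j in 1...m {
            let cost = abs(a[i - 1] - b[j - 1])
            // Extend the cheapest of the three possible previous alignments:
            // skip a sample of a, skip a sample of b, or advance in both.
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
        }
    }
    return dp[n][m]
}

let signalA: [Double] = [0, 1, 2, 3, 2, 1, 0]
let signalB: [Double] = [0, 0, 1, 2, 3, 2, 1, 0]
print(dtwDistance(signalA, signalB))  // 0.0: the second signal is just a slightly stretched copy of the first
```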