The Minkowski distance is a generalization of the Manhattan and Euclidean distances that adds a parameter p called the order. When the order is 1, the Minkowski distance equals the Manhattan distance, and when the order is 2, it equals the Euclidean distance. In the limit as the order tends to infinity, the Minkowski distance converges to the Chebyshev distance.
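These special cases can be checked numerically. The sketch below (plain Python, with function names of our own choosing) compares the Minkowski distance at orders 1, 2, and a large order against the Manhattan, Euclidean, and Chebyshev distances:

```python
# Illustrative sketch of the Minkowski distance for different orders p.
# The helper below is our own; it is not taken from any particular library.
def minkowski(x, y, p):
    """Minkowski distance of order p between two equal-length points."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (1.0, 2.0, 3.0), (4.0, 0.0, 3.5)

manhattan = sum(abs(a - b) for a, b in zip(x, y))           # order 1
euclidean = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5  # order 2
chebyshev = max(abs(a - b) for a, b in zip(x, y))           # order -> infinity

print(minkowski(x, y, 1), manhattan)    # equal
print(minkowski(x, y, 2), euclidean)    # equal
print(minkowski(x, y, 100), chebyshev)  # very close: a large order
                                        # approximates the maximum
```

At order 100 the largest coordinate difference dominates the sum, which is why the result is already numerically indistinguishable from the Chebyshev distance.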
Given two points x and y with coordinates (x1, x2, …, xn) and (y1, y2, …, yn) and an order p ≥ 1, the Minkowski distance between them is defined as:

d(x, y) = (Σᵢ₌₁ⁿ |xᵢ − yᵢ|ᵖ)^(1/p)

where |·| denotes the absolute value. This distance satisfies three properties for all points x, y, and z:

d(x, y) > 0 if x ≠ y, and d(x, x) = 0.
d(x, y) = d(y, x).
d(x, y) ≤ d(x, z) + d(z, y).

A distance that satisfies these properties is a metric. Thus, the Minkowski distance is a metric for p ≥ 1, but not for p < 1, because the triangle inequality does not hold in that case.
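The failure of the triangle inequality for p < 1 can be seen with a small concrete example (the points below are our own illustration):

```python
# Sketch showing why the Minkowski "distance" with order p < 1 is not a
# metric: the triangle inequality can fail.
def minkowski(x, y, p):
    """Minkowski distance of order p between two equal-length points."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y, z = (0.0, 0.0), (1.0, 1.0), (1.0, 0.0)
p = 0.5

direct = minkowski(x, y, p)                       # (1 + 1)^2 = 4
via_z = minkowski(x, z, p) + minkowski(z, y, p)   # 1 + 1 = 2

print(direct, via_z)  # 4.0 vs 2.0: d(x, y) > d(x, z) + d(z, y)
```

Going "directly" from x to y is longer than detouring through z, so the triangle inequality is violated and d is not a metric for this order.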
In machine learning, the Minkowski distance is widely used in algorithms that rely on a notion of distance to cluster or classify data. This is because a single parameter switches between the Manhattan (order 1) and Euclidean (order 2) distances, so the distance can be adapted to different needs. Algorithms that use distance in their calculations include K-Nearest Neighbors, Learning Vector Quantization (LVQ), Self-Organizing Maps (SOM), and K-Means Clustering.
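As a minimal sketch of this idea, the pure-Python 1-nearest-neighbor classifier below takes the order p as a tunable hyperparameter (the toy data and function names are our own; libraries such as scikit-learn expose the same choice through a p argument on their nearest-neighbor estimators):

```python
# A minimal 1-nearest-neighbor classifier using the Minkowski distance,
# with the order p as a hyperparameter. Toy data for illustration only.
def minkowski(x, y, p):
    """Minkowski distance of order p between two equal-length points."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def predict_1nn(train, query, p=2):
    """Return the label of the training point closest to `query`."""
    point, label = min(train, key=lambda pl: minkowski(pl[0], query, p))
    return label

# Toy training set: two clusters labeled "A" and "B".
train = [((0.0, 0.0), "A"), ((0.5, 0.2), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.3), "B")]

print(predict_1nn(train, (0.3, 0.1), p=1))  # "A", using Manhattan distance
print(predict_1nn(train, (5.1, 4.9), p=2))  # "B", using Euclidean distance
```

Changing p changes only the distance computation, not the classifier's structure, which is why trying several orders during model selection is cheap.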
LogicPlum’s platform is an automated tool for machine learning. As such, it uses distance-based algorithms, including K-Nearest Neighbors, Learning Vector Quantization (LVQ), Self-Organizing Maps (SOM), and K-Means Clustering.
Moreover, the platform's main advantages are speed and accuracy. Because the different algorithms are trained and tested automatically, the platform can evaluate different distance definitions in a short time and choose the most effective one.
© 2021 LogicPlum, Inc. All rights reserved.