Abstract
The support vector machine is a promising classification technique developed by V. N. Vapnik and his colleagues in the early 1990s. For over two decades, this technique has been successfully applied to the construction of classification programs used in many different areas of life. Since it was first proposed, the technique has been generalized continuously: first to the problem of data sets that are not linearly separable, then to the case of overlapping data sets, and finally to the more complicated situation in which the classes are mixed together. In all of these cases, the Lagrange multiplier method has proved effective in casting the problem in an explicit form from which algorithms can be derived. This paper aims to present the mathematical basis of this classification technique for cases ranging from simple to complex.
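As a concrete illustration of the role the Lagrange multiplier method plays in the simplest (linearly separable) case, the standard hard-margin formulation and its dual are sketched below; the notation is the conventional one and not necessarily that used in the body of the paper.

```latex
% Hard-margin SVM primal problem (linearly separable data (x_i, y_i), y_i in {-1, +1}):
\min_{\mathbf{w},\, b} \;\; \tfrac{1}{2}\|\mathbf{w}\|^2
\quad \text{s.t.} \quad y_i(\mathbf{w}^{\top}\mathbf{x}_i + b) \ge 1, \;\; i = 1,\dots,n.

% Lagrangian with multipliers \alpha_i \ge 0 attached to each constraint:
L(\mathbf{w}, b, \boldsymbol{\alpha})
  = \tfrac{1}{2}\|\mathbf{w}\|^2
  - \sum_{i=1}^{n} \alpha_i \bigl[ y_i(\mathbf{w}^{\top}\mathbf{x}_i + b) - 1 \bigr].

% Setting \partial L / \partial \mathbf{w} = 0 and \partial L / \partial b = 0 gives
\mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i,
\qquad
\sum_{i=1}^{n} \alpha_i y_i = 0,

% which, substituted back, yields the explicit dual problem:
\max_{\boldsymbol{\alpha}} \;\; \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n}\sum_{j=1}^{n}
    \alpha_i \alpha_j\, y_i y_j\, \mathbf{x}_i^{\top}\mathbf{x}_j
\quad \text{s.t.} \quad \alpha_i \ge 0, \;\; \sum_{i=1}^{n} \alpha_i y_i = 0.
```

The dual depends on the data only through inner products \(\mathbf{x}_i^{\top}\mathbf{x}_j\), which is what allows the generalizations mentioned above (soft margins for overlapping data, kernel substitution for mixed classes) to be handled within the same Lagrangian framework.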
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.