A theory of learning is proposed, which naturally extends the classic regularization framework of kernel machines to the case in which the agent interacts with a richer environment, compactly described by the notion of constraint. Variational calculus is exploited to derive general representer theorems that describe the structure of the solution to the learning problem.
It is shown that such a solution can be represented in terms of constraint reactions, which recall the corresponding notion in analytic mechanics. In particular, the derived representer theorems clearly show the extension of the classic kernel expansion on support vectors to an expansion on support constraints. As an application of the proposed theory, three examples are given, which illustrate the dimensional collapse to a finite-dimensional space of parameters.
The constraint reactions are calculated for the classic collection of supervised examples, for the case of box constraints, and for the case of hard holonomic linear constraints mixed with supervised examples. Interestingly, this leads to representer theorems for which the mathematical and algorithmic apparatus of kernel machines can be re-used.
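For reference, the classic representer theorem that the abstract generalizes states that the minimizer of a regularized empirical risk over an RKHS is a finite kernel expansion on the supervised examples. The sketch below recalls that result and gives an illustrative form of the extension to constraint reactions; the notation (operator $L$, Green's function $g$, reaction $\omega$) is an assumption for exposition, not the chapter's own.

% Classic representer theorem: the minimizer of
%   min_{f \in H_K}  \sum_i V(y_i, f(x_i)) + \lambda \|f\|_K^2
% is a finite expansion on the supervised examples (support vectors):
\[
f^\star(x) = \sum_{i=1}^{n} \alpha_i \, K(x, x_i).
\]
% Illustrative sketch of the extension: if the regularizer is induced
% by a differential operator L with Green's function g (L g(x,\cdot) = \delta_x),
% the Euler--Lagrange equations of the constrained problem are driven by
% the constraint reaction \omega (assumed form, for illustration only):
\[
L f^\star = \omega
\quad\Longrightarrow\quad
f^\star(x) = \int g(x, y)\, \omega(y)\, dy,
\]
% which collapses to the classic kernel expansion when \omega is a finite
% sum of Dirac deltas centered on the supervised examples, with g playing
% the role of the kernel.

In the supervised case the reaction concentrates on finitely many points, which is the "dimensional collapse to a finite-dimensional space of parameters" mentioned above.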
Authors: Giorgio Gnecco, Marco Gori, Stefano Melacci, Marcello Sanguineti
Book title: Artificial Neural Networks