Coming from Physics / Physical Chemistry, where Hilbert Spaces and Green’s Functions are everywhere and the notation is just Dirac Notation, it is sometimes hard to understand why practitioners of Machine Learning insist on using the esoteric formalism of Reproducing Kernel Hilbert Spaces (RKHS) to describe well-known things such as matrices, basis sets, and integral transforms. What is even more perplexing is using an RKHS to describe essentially finite Kernels, such as String Kernels, Graph Kernels, etc.
In my last post, I showed how to describe the standard Radial Basis Function (RBF) Kernel in terms of something very well known–a Low-Pass Filter–using something called a Regularization Operator. Naturally, we can represent other well-known Kernels, such as the Polynomial and Sigmoid Kernels, using similar methods.
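To recall the flavor of that argument (in my notation here, and only up to normalization constants, rather than the exact formulas of that post): for a translation-invariant kernel $k$, the RKHS norm weights each Fourier component of $f$ by the inverse of the kernel’s spectrum,

$$ \|f\|_{\mathcal{H}}^{2} \;=\; \int \frac{|\hat{f}(\omega)|^{2}}{\hat{k}(\omega)}\, d\omega, \qquad \hat{k}_{\mathrm{RBF}}(\omega) \;\propto\; e^{-\sigma^{2}\|\omega\|^{2}/2}. $$

Because the RBF spectrum decays like a Gaussian, high-frequency components of $f$ are penalized enormously, which is precisely what a Low-Pass Filter does.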
So where do these RKHS come from? Are they really that different from the Hilbert Spaces of Quantum Mechanics? Do we really need this formalism to solve simple problems?
I will approach this question by looking at a much harder problem–Quantum Gravity (specifically, Klauder’s formulation of Affine Quantum Gravity)–and some of the constructs that have been developed to describe it. Along the way, we will see many mathematical constructs familiar from Machine Learning, including Constrained Optimization, Lagrange Multipliers, RKHS, and even Regularization Parameters.
We will also learn about a familiar engineering tool–the Wavelet Transform.
And we will, in particular, learn about the mathematical construct that most genuinely calls for an RKHS–Coherent States.
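As a small preview of why Coherent States and RKHS belong together (this is the canonical textbook example, not yet Klauder’s affine construction): the overlap of two canonical coherent states is itself a reproducing kernel, the kernel of the Bargmann–Fock space of entire functions,

$$ \langle z \,|\, z' \rangle \;=\; \exp\!\left( \bar{z} z' - \tfrac{1}{2}|z|^{2} - \tfrac{1}{2}|z'|^{2} \right), \qquad K(\bar{z}, z') \;=\; e^{\bar{z} z'}. $$

Every state $|\psi\rangle$ becomes an entire function $\psi(\bar{z}) = e^{|z|^{2}/2}\langle z|\psi\rangle$, and the kernel $K$ reproduces it by integration against the coherent-state resolution of identity.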
I will even introduce a sample Coherent State Kernel using, as a basis, Google itself.
Stay tuned…