Kernels and Quantum Gravity Part 3: Coherent States

This would not be a pedagogic machine learning blog if I did not go into some overly abstract formalism…here we introduce the Kernel formalism using the language of Coherent States: we define a space of labels \mathcal{L} (isomorphic to \mathbb{R}^{n} or, more generally, just a locally compact space) and an abstract Hilbert space \mathcal{H}. We seek a map between the two, \mathcal{L} \rightarrow \mathcal{H}, such that

kernels over phase space

  1. The map is jointly (and weakly) continuous in (l,l'): l \rightarrow l' \Longrightarrow |l \rangle \rightarrow | l' \rangle
  2. We can define a positive measure \mu(l) on \mathcal{L} that relates back to the usual measure on \mathbb{R}^{n}, such that d\mu(l) = \rho(l)d^{n}l

Note that

  • every label maps to a non-zero state, and the associated density (in \mathbb{R}^{n}) is non-negative: | l \rangle \neq 0, \rho(l) \ge 0
  • We can normalize our label states as \langle l | l \rangle = \Vert\, |l\rangle \Vert^{2} = 1
  • define the resolution of identity on \mathcal{H} with this measure as: I=\int d\mu(l)|l\rangle\langle l|

In Machine Learning, we specify a Reproducing Kernel Hilbert Space (RKHS), which is weaker than specifying a full Resolution of the Identity operator. Furthermore, to be really technical, the RKHSs we speak of in machine learning are, in fact, a subset of all RKHSs: those with continuous (and even analytic) Kernels. Let’s compare:

Resolution of Identity

We may express a function | \psi \rangle with a continuous basis set expansion:

|\psi\rangle=I|\psi\rangle=\int d\mu(l)|l\rangle\langle l|\psi\rangle

where we identify the continuous function \Psi(l)=\langle l|\psi\rangle
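
To make the resolution of identity and this expansion concrete, here is a minimal numerical sketch (my own toy illustration, not part of the formalism above): three non-orthogonal unit vectors in \mathbb{R}^{2} form a tight frame, with the uniform weight w = 2/3 playing the role of the measure d\mu(l).

```python
import numpy as np

# Three unit vectors in R^2 at 120-degree angles (a "Mercedes-Benz" tight
# frame). With the uniform weight w = 2/3 standing in for the measure
# d mu(l), they resolve the identity: sum_i w |v_i><v_i| = I.
angles = np.pi/2 + np.arange(3) * 2*np.pi/3
V = np.stack([np.cos(angles), np.sin(angles)])   # columns are the states |v_i>
w = 2/3

print(np.allclose(w * V @ V.T, np.eye(2)))       # True: resolution of identity

# The expansion |psi> = sum_i w |v_i><v_i|psi> recovers any vector, even
# though the |v_i> are neither orthogonal nor linearly independent.
psi = np.array([0.3, -1.7])
Psi = V.T @ psi                                  # Psi(i) = <v_i|psi>
print(np.allclose(w * V @ Psi, psi))             # True: |psi> is reproduced
```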

Reproducing Kernel Hilbert Spaces

You may recall that Dirac invented his notation, the Dirac delta-function, and many other constructs without a fully rigorous mathematical formulation. As such, I think the machine learning community is trying to start off as rigorously correct as it can.

So rather than introduce an expression for I, we introduce an operator K(l,l') acting on the Hilbert space \mathcal{H} that encapsulates the fact that our basis set is non-orthogonal (and perhaps overcomplete?)

K(l,l')=\langle l|l'\rangle\neq\delta(l-l')

Our “basis set” expansion for |\psi\rangle is now in terms of a function K(l,l') that a computer scientist might say is bound in the variable l and unbound in the variable l'

\int_{\mathcal{L}}d\mu(l)K(l,l')\Psi(l)=\Psi(l')

We have some conditions on K(l,l'). It is symmetric, or Hermitian: K(l,l')=K^{*}(l',l); it is positive definite, so in particular K(l,l) \ge 0; and it can ‘reproduce itself’

\int_{\mathcal{L}}d\mu(y)K(x,y)K(y,z)=K(x,z)
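
A discrete toy check of both reproducing properties (again my own illustration): reuse the same three-vector frame from above, with the weight w = 2/3 standing in for d\mu. The Gram matrix K(i,j) = \langle v_i|v_j \rangle is plainly not a delta, yet it reproduces both \Psi and itself.

```python
import numpy as np

# Same Mercedes-Benz frame as before; the weight w = 2/3 is the measure.
angles = np.pi/2 + np.arange(3) * 2*np.pi/3
V = np.stack([np.cos(angles), np.sin(angles)])
w = 2/3

K = V.T @ V                              # K(i,j) = <v_i|v_j>, NOT a delta
print(np.round(K, 3))                    # off-diagonal entries are -0.5

psi = np.array([0.3, -1.7])
Psi = V.T @ psi                          # Psi(i) = <v_i|psi>
print(np.allclose(w * K @ Psi, Psi))     # sum_j w K(i,j) Psi(j) = Psi(i)
print(np.allclose(w * K @ K, K))         # sum_j w K(i,j) K(j,k) = K(i,k)
```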

In physics, we typically see these Kernels arising in Integral equations and Inhomogeneous differential equations, and as Propagators, Green’s functions, Resolvent Operators, Effective Hamiltonians, etc. They are everywhere! The main difference is that in other fields one usually tries to use prior knowledge of the problem to actually find the solution, rather than just guessing random Kernels and cross-validating (although there are important cases where it does seem like this, such as in Quantum Chemical Density Functional Theory).

What’s the difference? 

In the simplest sense, the Kernel takes the place of a Dirac delta function when the basis functions are non-orthogonal.  But we handle this by changing the measure on the Resolution of the Identity operator.  No sweat.

It is, of course, nice to have a solid, rigorous foundation, but the notation can get cumbersome when it confines us to handling special cases that never occur. We tell students that we do not need to guarantee that the Kernel operator factors, such that

K(l,l')=G^{*}(l)G(l')

Usually we see this factorization in terms of the inverse Kernel and the Regularization Operator \Gamma

K^{-1}(l,l')=\Gamma^{*}(l)\Gamma(l')
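
Here is a discrete sketch of both factorizations (my own illustration, with a Cholesky factor standing in for G and \Gamma): on a handful of sample labels, the Gaussian RBF Gram matrix K is symmetric positive definite, so both K and K^{-1} factor.

```python
import numpy as np

# Discrete stand-in for the factorizations above: on n sample labels the
# Gaussian RBF Gram matrix K is symmetric positive definite, so Cholesky
# gives K = G^T G, and the same applied to K^{-1} yields the regularization
# operator, K^{-1} = Gamma^T Gamma.
x = np.linspace(-3, 3, 8)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2)     # Gaussian RBF kernel

G = np.linalg.cholesky(K).T                         # K = G^T G
print(np.allclose(G.T @ G, K))

Kinv = np.linalg.inv(K)
Kinv = (Kinv + Kinv.T) / 2                          # symmetrize vs round-off
Gamma = np.linalg.cholesky(Kinv).T                  # K^{-1} = Gamma^T Gamma
print(np.allclose(Gamma.T @ Gamma, Kinv))
```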

[A related mathematical problem arises in the construction of Effective Operators in Quantum Chemistry and Physics. Namely, when can an effective operator A^{eff}, which, as we will see later, is a special kind of Kernel, be decomposed as A^{eff} = \Gamma^{t}\Gamma? We will discuss this in a later post.]

This is quite confusing to the uninitiated, or to those lacking significant mathematical training, because it makes it seem that Kernels are random and without physical intuition or insight.

This leads me to ask a question: are there any useful Kernels in Machine Learning that cannot be represented using the Resolution of Identity (or, equivalently, with the associated Regularization Operator)?

I would accept this recent paper on Reproducing Kernel Banach Spaces with the ℓ1 Norm

Physics of Coherent States

In Physics, we may think of the labels as the classical variables of phase space, (p,q), and the Hilbert space \mathcal{H} as the space of quantum mechanical wavefunctions |p,q \rangle. The map is the geometric foundation, the fabric if you will, of our connection between Classical and Quantum Physics. To understand this better, we will look at some explicit constructions of this map, using our explicit series-expansion construction of Coherent States and Wavelets, as well as Group Theoretic methods. At some point (maybe part 17 of this blog) I will try to get to the Coherent States that appear in Affine Quantum Gravity, in Loop Quantum Gravity, and in String Theory. More importantly for understanding machine learning, we will see the mathematical formulation of Frame Quantization and the attempts to capture the mathematics of coherent states under a single mathematical formalism (and how and when this is doable). Please stay tuned, ask questions, and post comments.
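
As a concrete warm-up for those posts, here is a minimal numerical sketch (my own toy illustration, assuming the standard harmonic-oscillator coherent states |z\rangle with z = (q + ip)/\sqrt{2}, truncated to a finite Fock basis) of the two facts we keep using: the coherent-state overlap kernel is a Gaussian rather than a delta function, and the states nevertheless resolve the identity.

```python
import numpy as np
from math import factorial

# Canonical coherent states truncated to N Fock levels:
#   <n|z> = exp(-|z|^2/2) z^n / sqrt(n!)
N = 12

def coherent(z):
    n = np.arange(N)
    return np.exp(-abs(z)**2 / 2) * z**n / np.sqrt([float(factorial(k)) for k in n])

# The overlap kernel K(z,z') = <z|z'> is a Gaussian, never a delta function:
z1, z2 = 1.0 + 0.5j, 1.2 + 0.4j
expected = np.exp(-abs(z1)**2/2 - abs(z2)**2/2 + np.conj(z1)*z2)
print(np.vdot(coherent(z1), coherent(z2)), expected)    # nearly equal

# Riemann-sum check of the resolution of identity I = (1/pi) \int d^2z |z><z|
# on the truncated space, in polar coordinates d^2z = r dr dtheta.
r = np.linspace(1e-3, 6.0, 300)
th = np.linspace(0, 2*np.pi, 64, endpoint=False)
I = np.zeros((N, N), dtype=complex)
for ri in r:
    for t in th:
        v = coherent(ri * np.exp(1j * t))
        I += np.outer(v, v.conj()) * ri
I *= (r[1] - r[0]) * (th[1] - th[0]) / np.pi
print(np.max(np.abs(I - np.eye(N))))                    # small: ~ identity
```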
