Add braket operator.
`braket(f, g)` is equivalent to `o.integrate(o.numpy.conj(f) * g)`.
However, the former is preferred: currently, taking the functional gradient

```python
jax.grad(lambda f: o.integrate(o.numpy.conj(f) * f))
```

results in redundant computation paths that are not optimized away by XLA.
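For intuition, here is a minimal discrete sketch of the braket as a Riemann sum in plain JAX. This is an assumption-laden illustration, not the library's actual implementation: functions are represented as arrays of samples on a grid, and `dx` is a hypothetical grid spacing.

```python
import jax
import jax.numpy as jnp

# Discrete sketch (assumption): represent f and g by their sampled values
# on a uniform grid, and approximate integrate(conj(f) * g) by a Riemann sum.
def braket(f, g, dx=0.01):
    return jnp.sum(jnp.conj(f) * g) * dx

# Norm-squared functional <f|f>. For real-valued f, its gradient with
# respect to the samples of f is 2 * f * dx.
def norm_sq(f):
    return braket(f, f).real

f = jnp.linspace(0.0, 1.0, 101)
grad_f = jax.grad(norm_sq)(f)
```

In this discrete analogue `jax.grad` differentiates through a single fused sum, so the duplicated-computation issue mentioned above does not arise; it is specific to the functional (operator-level) gradient path.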