The parameters of a univariate Gaussian distribution are $\mu$, its mean, and $\sigma^2$, its variance.
It can also be parameterised by its precision $\lambda = \frac{1}{\sigma^2}$.
Let $(x_n)_{n=1}^{N}$ be a set of $N$ observed realisations from a Normal distribution.
| Estimator | Value | Sampling distribution |
|---|---|---|
| $\hat{\mu} \mid (x_n)$ | $= \overline{x} = \frac{1}{N}\sum_{n=1}^N x_n$ | $\sim \mathcal{N}\left(\mu, \frac{\sigma^2}{N}\right)$ |
| $\hat{\sigma}^2 \mid (x_n)$ | $= \overline{x^2} - \overline{x}^2 = \frac{1}{N}\sum_{n=1}^N (x_n - \overline{x})^2$ | $\sim \frac{\sigma^2}{N} \chi^2\left(N-1\right)$ |
| $\hat{\sigma}^2 \mid \mu, (x_n)$ | $= \overline{x^2} + \mu(\mu - 2\overline{x}) = \frac{1}{N}\sum_{n=1}^N (x_n - \mu)^2$ | $\sim \frac{\sigma^2}{N} \chi^2\left(N\right)$ |

The last sampling distribution follows because, when $\mu$ is known, each $(x_n - \mu)/\sigma$ is standard Normal, so the sum of their $N$ squares is $\chi^2(N)$.
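As a sanity check of the estimators in the table, the following sketch draws a large sample and evaluates all three formulas (parameter values are arbitrary illustrations):

```python
import numpy as np

# Sketch: empirical check of the ML estimators for a univariate Normal.
# mu, sigma2 and N are arbitrary illustrative values.
rng = np.random.default_rng(0)
mu, sigma2, N = 1.5, 4.0, 100_000
x = rng.normal(mu, np.sqrt(sigma2), size=N)

mu_hat = x.mean()                            # x_bar = (1/N) sum x_n
sigma2_hat = x.var()                         # (1/N) sum (x_n - x_bar)^2
sigma2_hat_known_mu = np.mean((x - mu)**2)   # (1/N) sum (x_n - mu)^2

# The two expressions for the variance estimator agree:
assert abs((np.mean(x**2) - mu_hat**2) - sigma2_hat) < 1e-8

print(mu_hat, sigma2_hat, sigma2_hat_known_mu)
```

With $N = 10^5$ draws, all three estimates land close to the true values, consistent with their sampling distributions concentrating at rate $1/N$.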
We list here the distributions that can be used as conjugate priors for the parameters of a univariate Normal distribution:
| Parameter | Distribution | Notation |
|---|---|---|
| $\mu \mid \lambda$ | Univariate Normal | $\mathcal{N}_\lambda$ |
| $\lambda \mid \mu$ | Gamma | $\mathcal{G}_\mathcal{N}$ |
| $\sigma^2 \mid \mu$ | Inverse-Gamma | $\mathrm{Inv-}\mathcal{G}_\mathcal{N}$ |
| $\mu, \lambda$ | Normal-Gamma | $\mathcal{N}\mathcal{G}$ |
| $\mu, \sigma^2$ | Normal-Inverse-Gamma | $\mathcal{N}\mathrm{Inv-}\mathcal{G}$ |
Update equations can be found in the Conjugate prior article.
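As one concrete instance of those update equations, here is a minimal sketch (a standard result) of the conjugate update for $\mu$ under a Normal prior when the data precision $\lambda$ is known; the function name is mine:

```python
import numpy as np

# Sketch (standard conjugate update): Normal prior N(mu0, 1/lam0) on the mean,
# data x_n ~ N(mu, 1/lam) with known precision lam.
def posterior_mu(mu0, lam0, lam, x):
    """Return the posterior mean and precision of mu given observations x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    lam_post = lam0 + n * lam                          # precisions add
    mu_post = (lam0 * mu0 + lam * x.sum()) / lam_post  # precision-weighted mean
    return mu_post, lam_post

mu_post, lam_post = posterior_mu(mu0=0.0, lam0=1.0, lam=2.0, x=[1.0, 2.0, 3.0])
print(mu_post, lam_post)  # 12/7 ≈ 1.714 and 7.0
```

The posterior mean is a precision-weighted average of the prior mean and the data, so the prior is progressively overwhelmed as $N$ grows.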
For two Normal distributions $q = \mathcal{N}\left(\mu_q, \sigma_q^2\right)$ and $p = \mathcal{N}\left(\mu_p, \sigma_p^2\right)$, the KL-divergence can be written as
$$\mathrm{KL}\left(q \,\middle\|\, p\right) = H(q, p) - H(q),$$
where $H(q, p)$ is the cross-entropy and $H(q)$ the entropy. We have
$$H(q) = \frac{1}{2}\ln\left(2\pi e\,\sigma_q^2\right), \qquad H(q, p) = \frac{1}{2}\left[\ln\left(2\pi\sigma_p^2\right) + \frac{\sigma_q^2 + (\mu_q - \mu_p)^2}{\sigma_p^2}\right].$$
Consequently,
$$\mathrm{KL}\left(q \,\middle\|\, p\right) = \frac{1}{2}\left[\ln\frac{\sigma_p^2}{\sigma_q^2} + \frac{\sigma_q^2 + (\mu_q - \mu_p)^2}{\sigma_p^2} - 1\right].$$
Or, if a parameterisation based on the precision is used,
$$\mathrm{KL}\left(q \,\middle\|\, p\right) = \frac{1}{2}\left[\ln\frac{\lambda_q}{\lambda_p} + \frac{\lambda_p}{\lambda_q} + \lambda_p(\mu_q - \mu_p)^2 - 1\right].$$
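The closed-form KL between two univariate Gaussians exists in both a variance-parameterised and a precision-parameterised form (a standard result); this sketch implements both and checks that they agree:

```python
import numpy as np

# Sketch: KL(q || p) between univariate Gaussians, variance parameterisation.
def kl_var(mu_q, s2_q, mu_p, s2_p):
    return 0.5 * (np.log(s2_p / s2_q) + (s2_q + (mu_q - mu_p)**2) / s2_p - 1.0)

# Same divergence, precision parameterisation (lam = 1 / sigma^2).
def kl_prec(mu_q, l_q, mu_p, l_p):
    return 0.5 * (np.log(l_q / l_p) + l_p / l_q + l_p * (mu_q - mu_p)**2 - 1.0)

# Same pair of distributions expressed both ways: N(0, 1) vs N(1, 2).
print(kl_var(0.0, 1.0, 1.0, 2.0))   # = ln(2)/2 ≈ 0.3466
print(kl_prec(0.0, 1.0, 1.0, 0.5))  # same value
```

Note the asymmetry of the divergence: swapping $q$ and $p$ changes the result unless the two distributions coincide, in which case it is zero.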
When the Normal distribution is used as a conjugate prior for the mean of another Normal distribution with known precision $\lambda$, it makes sense to parameterise it in terms of its expected value, $\mu_0$, and degrees of freedom, $n_0$:
$$p(\mu) = \mathcal{N}\left(\mu \,\middle|\, \mu_0, (n_0\lambda)^{-1}\right).$$
The KL-divergence between two such priors, $q$ with parameters $(\mu_q, n_q)$ and $p$ with parameters $(\mu_p, n_p)$, can be written as
$$\mathrm{KL}\left(q \,\middle\|\, p\right) = \frac{1}{2}\left[\ln\frac{n_q}{n_p} + \frac{n_p}{n_q} + n_p\lambda(\mu_q - \mu_p)^2 - 1\right].$$
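This $(\mu_0, n_0)$ parameterisation is just the precision parameterisation with $\lambda_q = n_q\lambda$ and $\lambda_p = n_p\lambda$, which can be verified numerically (a sketch; the function names and values are illustrative):

```python
import numpy as np

# Sketch: KL between two Normal priors on the mean in the (mu0, n0)
# parameterisation, i.e. q = N(mu_q, 1/(n_q lam)), p = N(mu_p, 1/(n_p lam)).
def kl_n0(mu_q, n_q, mu_p, n_p, lam):
    return 0.5 * (np.log(n_q / n_p) + n_p / n_q + n_p * lam * (mu_q - mu_p)**2 - 1.0)

# Generic precision-parameterised KL for comparison.
def kl_prec(mu_q, l_q, mu_p, l_p):
    return 0.5 * (np.log(l_q / l_p) + l_p / l_q + l_p * (mu_q - mu_p)**2 - 1.0)

lam = 2.0
a = kl_n0(0.5, 4.0, 0.0, 6.0, lam)
b = kl_prec(0.5, 4.0 * lam, 0.0, 6.0 * lam)
print(abs(a - b) < 1e-12)  # True: the two parameterisations agree
```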
Created by Yaël Balbastre on 6 April 2018. Last edited on 6 April 2018.