Vimal wrote:
> Thanks, David, for the explanation.
>
> Can I say that:
> 1) For EM, the E-step is: \int p(x) log(q(x)) dx
> and in the
> maximization w.r.t. some parameter, we can take p(x) as fixed, i.e.
> M : \int p(x) (log(q(x)))' dx
>

The earlier discussion was really just for "ordinary" maximum
likelihood ... if you want to think about the EM algorithm
specifically, then I think you need to extend the notation to
explicitly distinguish between the observed and "missing" data, since
the "expectation" step applies only to the missing part of the data.

> and
>
> 2) Entropy = - \int p(x) log(p(x)) dx
> and in the
> maximization, differentiating (by the product rule) with respect to some
> parameter, we get M : -\int p'(x) log(p(x)) dx
>
> Cheers,
> Vimal
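
On (2): the product rule actually produces two terms, but the second
one vanishes. Assuming p depends on a parameter theta, ' denotes
d/dtheta, and differentiation under the integral sign is justified:

    d/dtheta [ - \int p(x) log(p(x)) dx ]
        = - \int p'(x) log(p(x)) dx - \int p(x) (p'(x)/p(x)) dx
        = - \int p'(x) log(p(x)) dx - \int p'(x) dx

and \int p'(x) dx = d/dtheta \int p(x) dx = d/dtheta (1) = 0, since
p integrates to 1 for every theta. So the expression in (2) is right.

Here is a quick numerical sanity check (a sketch of mine, not from the
thread; it uses the Exponential(theta) density, whose entropy is
1 - log(theta) with derivative -1/theta):

    import numpy as np
    from scipy.integrate import quad

    theta, h = 2.0, 1e-5
    UPPER = 50.0   # effectively infinity for Exponential(2)

    def p(x, th):
        # Exponential(th) density on (0, infinity)
        return th * np.exp(-th * x)

    def entropy(th):
        # H(th) = -\int p log p dx, computed numerically
        val, _ = quad(lambda x: -p(x, th) * np.log(p(x, th)), 0, UPPER)
        return val

    # d/dtheta of the entropy, by central finite differences
    lhs = (entropy(theta + h) - entropy(theta - h)) / (2 * h)

    # -\int p'(x) log(p(x)) dx, with p' also by finite differences
    def dp(x, th):
        return (p(x, th + h) - p(x, th - h)) / (2 * h)

    rhs, _ = quad(lambda x: -dp(x, theta) * np.log(p(x, theta)), 0, UPPER)

    print(lhs, rhs, -1.0 / theta)   # all three should be close to -0.5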

