Vimal wrote:
> Hi,
>
> Can someone explain or give some urls on 'difference' between
> expectation-maximization and entropy maximization.
>
> To me both seems to maximize the E(log(p(x))) where p(x) is the pdf,
> although both originate from different theories.
>
> Thanks,
>
> Cheers,
> Vimal

I think what you say is not quite right...

... let p(x) be the true pdf and q(x) a candidate for the true pdf.
Then "expectation-maximization" seeks to maximise E(log(q(x))), where
the expectation E is taken under p(x). In practice p(x) is unknown and
the expectation is estimated from a data sample, but the point is that
p(x) is treated as fixed while q(x) is varied.

... whereas "entropy maximization" seeks to maximise the entropy
-E(log(p(x))) (note the minus sign), where E again derives from p(x);
here both instances of p(x) are the same, and both are varied
together.
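The second objective can be sketched the same way (again the pmfs and
names below are illustrative, not from the thread): the entropy
-E(log(p(x))) is computed with both instances of p varying together,
and with no further constraints it is maximised by the uniform
distribution, at log of the number of outcomes.

```python
import math

def entropy(p):
    """H(p) = -E_p[log p(x)] = -sum_x p(x) * log p(x).
    Unlike the EM objective, both occurrences of p vary together."""
    return -sum(px * math.log(px) for px in p if px > 0)

# Illustrative pmfs on 3 outcomes; uniform should score highest.
pmfs = {
    "uniform": [1/3, 1/3, 1/3],
    "peaked":  [0.8, 0.1, 0.1],
    "other":   [0.6, 0.3, 0.1],
}
H = {name: entropy(p) for name, p in pmfs.items()}
best = max(H, key=H.get)  # expect "uniform", with H = log(3)
```

So the two objectives differ exactly where the thread says: in the
first, p is fixed and q moves; in the second, p moves everywhere at
once (and the sign flips).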

David Jones


=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
