Thanks, Aleks, for the correction.
Yes, I also realized that it should be entropy minimization.

About entropy maximization:
> Maximum entropy principle is about the idea that given a number of
> equivalent prospective models, you should pick the one with the
> highest entropy.
I guess EM also tries to do something similar, i.e. it iteratively
maximizes the likelihood so that the incomplete-data pdf matches the
complete-data pdf for a given parameter.
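
If I write out my current understanding (just a sketch, using \theta for
the parameter, x for the observed data and z for the missing part --
please correct me if this is off), EM seems to repeat

  E-step:  Q(\theta | \theta^{(t)}) = E_{z ~ p(z | x, \theta^{(t)})} [ \log p(x, z | \theta) ]
  M-step:  \theta^{(t+1)} = \arg\max_\theta  Q(\theta | \theta^{(t)})

and each such step is supposed not to decrease the observed-data
log-likelihood \log p(x | \theta).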

Can you please elaborate on the difference? I am a newbie to
information-theoretic approaches.

"Aleks Jakulin" <jakulin@@ieee.org> wrote in message news:<[EMAIL PROTECTED]>...
> "Vimal" wrote:
> > Can someone explain or give some urls on 'difference' between
> > expectation-maximization and entropy maximization.
> >
> > To me both seem to maximize E(log(p(x))), where p(x) is the pdf,
> > although both originate from different theories.
> 
> Watch out. Maximizing log-likelihood (and EM is a particular approach
> to this) is similar to *minimizing* the cross or relative entropy
> (Kullback-Leibler divergence).
> 
> Maximum entropy principle is about the idea that given a number of
> equivalent prospective models, you should pick the one with the
> highest entropy.
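
Trying to spell out the first point above for myself (please correct me
if this reading is wrong): with \hat{p} the empirical distribution of the
data x_1, ..., x_N and p_\theta the model,

  KL(\hat{p} || p_\theta) = -H(\hat{p}) - (1/N) \sum_i \log p_\theta(x_i),

and since H(\hat{p}) does not depend on \theta, maximizing the average
log-likelihood is the same as minimizing this KL divergence. The maximum
entropy principle, as I understand it, instead maximizes H(p) itself over
all distributions p consistent with some constraints; e.g. with only the
moment constraints E_p[f_k(x)] = c_k, the solution has the
exponential-family form p(x) proportional to \exp(\sum_k \lambda_k f_k(x)).
So one minimizes a relative entropy to fit the data, while the other
maximizes an entropy subject to constraints -- is that the right way to
see it?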