I think that Dimitri overstates his case a bit.

This multiplication in observation space works for some algorithms, but not for
others.  Ordinary least squares regression is somewhat of an exception here in
that the trick does work.  Logistic regression is a simple counter-example,
since the weights enter the log-likelihood rather than the observations.

It is still useful to have a vector of weights, and it helps users.  It may be
useful in some situations to also allow a full correlation matrix, but I
haven't had a need for that yet.

On Sun, Feb 22, 2009 at 11:24 AM, Dimitri Pourbaix <pourb...@astro.ulb.ac.be> wrote:

> Either one considers the full weighting matrix (including potential
> correlation between observations) or one does not account for any weight
> at all.  By premultiplying both the function matrix and the observation
> vector  by the square root of the weight matrix, one can forget about it
> completely in the rest of the computation.
>



-- 
Ted Dunning, CTO
DeepDyve
