AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-12 Thread Nicholas Nagle
I guess I forgot to send this to the list, so my apologies to Njeri for sending
this twice...


For OK, the Lagrange multiplier is

  (1 - sum of simple kriging weights) / sum(sum(inv(C)))

where sum(sum(inv(C))) = 1' inv(C) 1, the sum of all the entries of inv(C).

See Cressie, p. 123 for a start, but as I recall, Chiles and Delfiner have a
nice section on this as well.

The last factor, 1 / (1' inv(C) 1), is the reciprocal of the information for
estimating a constant mean. It gets large with strong correlation (we can't
estimate the mean as precisely due to data redundancy).

The first term is the difference between 1 and the sum of the simple kriging
weights (i.e. how "strongly" the sum-to-one constraint binds). The SK weights
tend to sum closer to 1 if the prediction point is close to other data. So if
we predict close to other data, the error in estimating the global mean
matters less.

If we are predicting far from the data, precise estimates of the global mean
are important; if we are close to the data, they matter less.

Taken together, the Lagrange multiplier measures the portion of our
prediction error that is due to having to estimate the mean in the first place.
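The identity above can be checked numerically. The sketch below uses a toy
one-dimensional configuration and an exponential covariance of my own choosing
(not from the post), and the sign of the multiplier depends on how the
constraint enters the Lagrangian; the convention here is the one that matches
the formula as written.

```python
import numpy as np

# Toy 1-D configuration: three samples, one prediction point (illustrative only).
x = np.array([0.0, 1.0, 3.0])
x0 = 1.5
cov = lambda h: np.exp(-np.abs(h))          # exponential covariance, sill 1

C = cov(x[:, None] - x[None, :])            # sample-to-sample covariances
c0 = cov(x - x0)                            # sample-to-prediction covariances
ones = np.ones(len(x))

# Simple kriging weights: lam_SK = inv(C) c0
lam_sk = np.linalg.solve(C, c0)

# Ordinary kriging system, sign convention C lam - m 1 = c0, 1' lam = 1
A = np.block([[C, -ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(c0, 1.0))
lam_ok, m = sol[:-1], sol[-1]

# The multiplier equals (1 - sum of SK weights) / (sum of all entries of inv(C))
m_formula = (1.0 - lam_sk.sum()) / np.linalg.inv(C).sum()
print(m, m_formula)  # the two agree
```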

A good description of kriging as a regression and without using multipliers
appeared some time ago (early-mid 90s?) in an article by Stein and Corsten in
JASA.

Hope this helps

Cheers,
Nicholas


Nicholas N. Nagle, Assistant Professor
University of Colorado
Department of Geography
UCB 260, Guggenheim 110
Boulder, CO 80309-0260
phone: 303-492-4794


Quoting Njeri Wabiri <[EMAIL PROTECTED]>:

> Dear list
> Just a newbie question:
> What is the statistical interpretation of the Lagrange multiplier in
> kriging?
> At least I know that if it is positive we have a high kriging variance, and
> vice versa.
>
> Grateful for a response and a reference
>
> Njeri
>
+
+ To post a message to the list, send it to ai-geostats@jrc.it
+ To unsubscribe, send email to majordomo@jrc.it with no subject and
"unsubscribe ai-geostats" in the message body. DO NOT SEND
Subscribe/Unsubscribe requests to the list
+ As a general service to list users, please remember to post a summary of any
useful responses to your questions.
+ Support to the forum can be found at http://www.ai-geostats.org/


AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-12 Thread Isobel Clark
Njeri

The full expression for the estimation variance contains three terms:

1) twice the weighted average of the semi-variograms between each sample and
the point to be estimated
2) the doubly weighted average of all the semi-variograms between every
possible pair of samples used in the estimation
3) if estimating over an area or volume, the average semi-variogram between
every pair of points inside that area or volume

(2) and (3) can also be described as the "variance amongst the sample values"
and the "within-block variance" respectively, and are subtracted from (1).

When ordinary kriging is derived, the Lagrangian multiplier is introduced to
make sure the weights add up to 1. It turns out that the Lagrangian multiplier
is equal to half of term (1) minus term (2). Intuitively, it is the balance
between how well your samples relate to the unknown value and how well they
relate to one another.

For example: if your samples are all close to the estimated location, term (1)
will be small; if they are all close to one another, term (2) will be small.
Ideally we want term (1) to be as small as possible and term (2) to be as big
as possible. This translates into: "Lagrangian multiplier big and positive"
means the samples are either too far from the point to be estimated or are
highly clustered; "Lagrangian multiplier big and negative" means the samples
are (too?) close to the estimated point and widely spaced around it.

One might see a zero Lagrangian multiplier as the perfect balance between the
sampling layout and the prediction of unknown values. Or not, as you prefer.

Hope this helps
Isobel
http://www.kriging.com

Njeri Wabiri <[EMAIL PROTECTED]> wrote:
> Dear list
> Just a newbie question:
> What is the statistical interpretation of the Lagrange multiplier in
> kriging?
> At least I know that if it is positive we have a high kriging variance, and
> vice versa.
> Grateful for a response and a reference
> Njeri
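The claim that the multiplier equals half of term (1) minus term (2) can be
verified numerically. The sketch below is my own toy setup (points,
semivariogram model, and the sign convention for the multiplier are all
assumptions); with this convention the identity follows from premultiplying
the ordinary kriging equations by the weights.

```python
import numpy as np

# Toy 1-D configuration: three samples, one prediction point (illustrative only).
x = np.array([0.0, 1.0, 3.0])
x0 = 1.5
gamma = lambda h: 1.0 - np.exp(-np.abs(h))  # exponential semivariogram

G = gamma(x[:, None] - x[None, :])          # sample-to-sample semivariograms
g0 = gamma(x - x0)                          # sample-to-prediction semivariograms
ones = np.ones(len(x))

# OK system in semivariogram form: G lam + mu 1 = g0, 1' lam = 1
A = np.block([[G, ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.append(g0, 1.0))
lam, mu = sol[:-1], sol[-1]

term1 = 2.0 * lam @ g0   # twice the weighted sample-to-target average
term2 = lam @ G @ lam    # doubly weighted sample-to-sample average
print(mu, term1 / 2.0 - term2)  # multiplier = half of term (1) minus term (2)
```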

RE: AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-11 Thread Ted Harding
On 11-Oct-06 Njeri Wabiri wrote:
> Dear list
> Just a newbie question:
> What is the statistical interpretation of the Lagrange
> multiplier in kriging?
> At least I know that if it is positive we have a high kriging
> variance, and vice versa.
> 
> Grateful for a response and a reference
> 
> Njeri 

The general interpretation of a Lagrange multiplier is
as follows.

In the context of an extremal problem (find the max/min)
of a function

  f(x1,x2,...,xk)

subject to a constraint

  g(x1,x2,...,xk) = c

when you express it as solving the extremal problem for

  f(x1,x2,...,xk) - L*g(x1,x2,...,xk)

(and then using the constraint equation to eliminate L),
the value of L is equal to the rate of change of the
extremal value of f (as so found) with respect to variation
in the value of c.

Otherwise put: for a particular value of c, the extremal
value of f is (say) fmax(c). Then L = dfmax(c)/dc. The
same is true with several constraints (and a corresponding
number of Lagrange multipliers), in that the i-th L is
equal to the partial derivative of fmax(c1,c2,...,cr)
with respect to ci.

[This assumes that the function f has a max/min which
can be found analytically by solving the equation[s]
obtained by setting derivative[s] equal to 0. The above
is not quite so directly true when the maximum is attained
on the boundary of the region defined by the constraints,
as in a Linear Programming problem, for instance; though
something similar is also true there.]

I can supply a demonstration of the above, if requested.

So, if (in a statistical context) a sum of squares is
minimised, or a likelihood is maximised, subject to constraints,
then the above applies to that sum of squares, likelihood,
or whatever is being optimised.
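As a quick numerical demonstration of L = dfmax(c)/dc (my own toy example, not
from the post): minimize f(x, y) = x^2 + y^2 subject to x + y = c. Stationarity
of f - L*(g - c) gives 2x = L and 2y = L, so x = y = c/2 and L = c, and a
finite-difference estimate of dfmin/dc recovers the same value.

```python
# Constrained problem: minimize f(x, y) = x^2 + y^2 subject to x + y = c.
# Stationarity of f - L*(g - c): 2x = L, 2y = L  =>  x = y = c/2, L = c.
def fmin(c):
    x = y = c / 2.0          # constrained minimizer, found analytically
    return x * x + y * y     # fmin(c) = c^2 / 2

c = 3.0
L = c                        # multiplier at the optimum
h = 1e-6
dfmin_dc = (fmin(c + h) - fmin(c - h)) / (2 * h)   # central finite difference
print(L, dfmin_dc)           # L equals d fmin / dc
```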

Hoping this helps,
Ted.


E-Mail: (Ted Harding) <[EMAIL PROTECTED]>
Fax-to-email: +44 (0)870 094 0861
Date: 11-Oct-06   Time: 21:27:33
-- XFMail --


AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-11 Thread Njeri Wabiri

Dear list
Just a newbie question:
What is the statistical interpretation of the Lagrange multiplier in
kriging?
At least I know that if it is positive we have a high kriging variance, and
vice versa.


Grateful for a response and a reference

Njeri 

