Re: AI-GEOSTATS: kriging variance and accuracy

2006-10-12 Thread tom andrews
  Dear List,

  Let us consider a theoretical model of a stationary random function with some correlation function. The model and the correlation function are beyond any doubt. The first generation of outcome values does not have to coincide with the second, and so on. It means that the "true" outcome value from the first generation at some coordinate does not have to coincide with the "true" outcome value from the second generation at the same coordinate, and so on.

  In practice we see, and try to estimate, only one generation. From the model point of view we should rather express the kriging variance by
  a) var( estimate(x) - Z'(x) ), where Z' can generate all outcome values of the random variable at some coordinate x,
  and not by
  b) var( estimate(x) - Z(x) ), where Z generates only one "true" outcome value at some coordinate x.

  I see that 1.96 catches 95% of the probability in case b but not in case a (except for mean estimation in case a).

  My thesis (the right-hand side of the kriging variance does not lie):

  a) In statistics we have the variance E{ [E{V}-V]^2 }, where E{V} is the expected VALUE and V is the random VARIABLE.
  b) The kriging variance is in fact E{ [S{V}-V]^2 }, where S{V} is a spread VALUE and V is the random VARIABLE.
  c) IF S{V} = E{V} then E{ [S{V}-V]^2 } = E{ [E{V}-V]^2 } = sigma^2
     IF S{V} ≠ E{V} then E{ [S{V}-V]^2 } > E{ [E{V}-V]^2 } = sigma^2
     (a small numerical check of this is sketched after the list)
  d) We should forget about the kriging variance in the case of interpolation.
  e) We should analyze our interpolation results outside the area of interest (so that we do not meet known values, where the kriging variance is zero by default). If the kriging variance is equal to sigma^2 (sigma^2 is a multiplier in the correlation function term, so we can only analyze the variance ratio), then we know the mean value of V. If the kriging variance tends to zero, it means that the predicted value "disappears" in the tail of the distribution of the random variable V.

  All considerations refer to the ordinary kriging variance.
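A minimal numerical check of point (c), assuming only that V is a single Gaussian random variable (the mean, the variance and the offset 1.5 below are illustrative values, not taken from the post):

    # E{(a - V)^2} = Var(V) + (a - E{V})^2, so the mean squared deviation
    # exceeds sigma^2 exactly when the fixed value a differs from E{V}.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 10.0, 2.0                      # E{V} and sqrt(Var{V})
    V = rng.normal(mu, sigma, size=1_000_000)

    for a in (mu, mu + 1.5):                   # a = E{V}, then a "spread value" S{V} != E{V}
        print(a, np.mean((a - V)**2), sigma**2 + (a - mu)**2)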
  P.S. Gaussian noise is unpredictable (its spread value is unpredictable); it has only a mean and a variance. Gaussian noise on a linear drift also has only a mean and a variance, because the detrended Gaussian noise in the drift has only a constant mean and variance.

  Best Regards
  tom

AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-12 Thread Isobel Clark
Njeri,

The full expression for the estimation variance contains three terms:

1) twice the weighted average of the semi-variograms between each sample and the point to be estimated;
2) the doubly weighted average of all the semi-variograms between every possible pair of samples used in the estimation;
3) if estimating over an area or volume, the average semi-variogram between every pair of points inside that area or volume.

(2) and (3) can also be described as the "variance amongst the sample values" and the "within-block variance" respectively, and are subtracted from (1).

When ordinary kriging is derived, the Lagrangian multiplier is introduced to make sure the weights add up to 1. It turns out that the Lagrangian multiplier is equal to half of term (1) minus term (2). Intuitively, it is the balance between how well your samples relate to the unknown value and how well they relate to one another.
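That identity is easy to check numerically. Here is a small sketch for point ordinary kriging (the exponential variogram model and the coordinates are made-up illustrative values, and the sign of the multiplier depends on how the kriging system is written):

    # Solve the ordinary kriging system in semi-variogram form and compare the
    # Lagrangian multiplier with half of term (1) minus term (2).
    import numpy as np

    def gamma(h, sill=1.0, rang=10.0):           # exponential semi-variogram, no nugget
        return sill * (1.0 - np.exp(-h / rang))

    x  = np.array([[0., 0.], [3., 1.], [1., 4.], [8., 2.]])   # sample locations
    x0 = np.array([2., 2.])                                    # point to be estimated

    G  = gamma(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))
    g0 = gamma(np.linalg.norm(x - x0, axis=1))
    n  = len(x)

    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = G
    A[n, :n] = A[:n, n] = 1.0                    # unbiasedness (sum-to-one) constraint
    sol = np.linalg.solve(A, np.append(g0, 1.0))
    w, lam = sol[:n], sol[n]                     # OK weights and Lagrangian multiplier

    term1 = 2.0 * w @ g0                         # twice the weighted average, samples vs target
    term2 = w @ G @ w                            # doubly weighted average over sample pairs
    print(lam, 0.5 * term1 - term2)              # these two agree
    print(term1 - term2)                         # = the ordinary kriging (estimation) variance

(For point estimation term (3) is zero, so the last line is simply term (1) minus term (2).)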
For example: if your samples are all close to the estimated location, term (1) will be small; if they are all close to one another, term (2) will be small. Ideally we want term (1) to be as small as possible and term (2) to be as big as possible. This translates into: "Lagrangian multiplier big and positive" means the samples are either too far from the point to be estimated or are highly clustered; "Lagrangian multiplier big and negative" means the samples are (too?) close to the estimated point and widely spaced around it. One might see a zero Lagrangian multiplier as the perfect balance between the sampling layout and the prediction of unknown values. Or not, as you prefer.

Hope this helps
Isobel
http://www.kriging.com

Njeri Wabiri [EMAIL PROTECTED] wrote:

 Dear list
 Just a newbie question: what is the statistical interpretation of the Lagrange multiplier in kriging? At least I know that if it's positive we have a high kriging variance, and vice versa.
 Grateful for a response and a reference
 Njeri

AI-GEOSTATS: Re: Lagrange Multiplier

2006-10-12 Thread Nicholas . Nagle
I guess I forgot to send this to the list, so my apologies to Njeri for sending
this twice...


For OK, the Lagrange multiplier is

lambda = (1 - sum of the simple kriging weights) * inv( sum( sum( inv(C) ) ) )

See Cressie, p. 123 for a start, but as I recall, Chiles and Delfiner have a
nice section on this as well.

The last factor, inv(sum(sum(inv(C)))), is 1 over the information for estimating a mean.
It gets large with strong correlation (we can't estimate the mean as precisely
due to data redundancy).

The first factor is the difference between the sum of the simple kriging weights and 1
(i.e. how strongly our constraint on summing to 1 binds).  The SK weights tend to sum
closer to 1 if the prediction point is close to other data.  So if we predict
close to other data, the error in estimating the global mean matters less.

If we are predicting far from the data, precise estimates of the global mean are
important, if we are close to the data, not so important.

Taken together, the Lagrange multiplier helps to measure the portion of our
prediction error that is due to estimating the mean in the first place.
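To make this concrete, here is a small numpy sketch in covariance form (the exponential covariance and the coordinates are made-up illustrative values; the sign of the multiplier depends on the convention used to write the OK system):

    # Compare the OK Lagrange multiplier with (1 - sum of SK weights) / (1' inv(C) 1),
    # and show that the extra OK prediction variance over SK is the mean-estimation part.
    import numpy as np

    def cov(h, sill=1.0, rang=10.0):             # exponential covariance
        return sill * np.exp(-h / rang)

    x   = np.array([[0., 0.], [3., 1.], [1., 4.], [8., 2.]])   # data locations
    x0  = np.array([2., 2.])                                    # prediction location
    one = np.ones(len(x))

    C  = cov(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))
    c0 = cov(np.linalg.norm(x - x0, axis=1))

    w_sk = np.linalg.solve(C, c0)                # simple kriging weights

    # ordinary kriging system:  C w - m 1 = c0,  1'w = 1
    A = np.block([[C, -one[:, None]], [one[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(A, np.append(c0, 1.0))
    w_ok, m = sol[:-1], sol[-1]

    info = one @ np.linalg.solve(C, one)         # 1' inv(C) 1 = sum(sum(inv(C)))
    print(m, (1.0 - w_sk.sum()) / info)          # the two agree

    sk_var = cov(0.0) - w_sk @ c0                # simple kriging variance
    ok_var = cov(0.0) - w_ok @ c0 + m            # ordinary kriging variance
    print(ok_var - sk_var, (1.0 - w_sk.sum())**2 / info)   # also agree

The quantity in the last line is exactly the part of the prediction variance that comes from having to estimate the mean.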

A good description of kriging as a regression and without using multipliers
appeared some time ago (early-mid 90s?) in an article by Stein and Corsten in
JASA.

Hope this helps

Cheers,
Nicholas


Nicholas N. Nagle, Assistant Professor
University of Colorado
Department of Geography
UCB 260, Guggenheim 110
Boulder, CO 80309-0260
phone: 303-492-4794


Quoting Njeri Wabiri [EMAIL PROTECTED]:

 Dear list
 Just a newbie question:
 What is the statistical interpretation of the Lagrange multiplier in
 kriging?
 At least I know that if it's positive we have a high kriging variance and vice
 versa.

 Grateful for a response and a reference

 Njeri




Re: AI-GEOSTATS: kriging variance and accuracy

2006-10-12 Thread Gerald van den Boogaart
Dear Tom Andrews,

I have to object to your last mail.

In the rare case where we would like to predict the local value at some
location on a second earth from observations of the first earth, we should
follow your suggestions.

However, when we are in the usual kriging situation, we typically do not assume
that the local geology changes between measurement and prediction, and thus the
important quantity is not the difference between the kriging predictor and the
expectation, nor the difference between the expectation and the realized value,
but the difference between the predicted and the realized value.

I have also added some inline comments to the mail below.

Best regards,
Gerald v.d. Boogaart


On Thursday, 12 October 2006 at 11:51, tom andrews wrote:
   Dear List

   Let us consider theoretical model of stationary random function
   with some correlation function.
   Model and correlation function are out of any doubts.
   First generation of outcome values does not have to cover
   the second and go on.
   It means that true outcome value from first generation
   at some coordinate does not have to cover the true outcome
   value from second generation at the same coordinate and go on.

   In practice we see and try to estimate only one generation.


So this is what kriging and geostatistics in general try to do. If you would
like to predict another generation, you need something like space-time kriging,
where you instruct kriging to estimate the next generation; plain spatial
kriging is not applicable in that situation.

   In model point of view we should rather to express
   kriging variance by
   a) var( estimate(x)-Z'(x) ) where Z' can generate all outcome values
   of random variable at some coordinate x
   not by
   b) var( estimate(x)-Z(x) ) where Z generates only one true outcome
 value at some coordinate x

You say we *should* express the kriging variance that way. However, that would mean
changing the definition. Call this object an "Andrews variance" and we can discuss
the usefulness of the Andrews variance.


   I see that 1.96 catches 95% of probability in case b but not in a
   (except mean estimation in case a).

It catches 95% of the marginal probability and 95% of the next generation.
It catches more than 95% on the same generation. However, for the next
generation you can get far shorter prediction intervals by using an adequate
method (e.g. estimation of mean and variance, rather than kriging).


   My thesis (right side of kriging variance does not lie):

   a) In statistics we have the variance E{ [E{V}-V ]^2 }  where E{V} is
   expected VALUE and V is random VARIABLE

   b) Kriging variance in fact is E{ [S{V}-V]^2 } where S{V} is spread VALUE
   and V is random VARIABLE

S{V} is the prediction of the random variable; it is not a spread value, but is
itself random.


   c) IF S{V} = E{V} then E{ [S{V}-V]^2 } = E{ [E{V}-V]^2 } = sigma^2
   IF S{V} ≠ E{V} then E{ [S{V}-V]^2 } > E{ [E{V}-V]^2 } = sigma^2

Typically the kriging variance is smaller than sigma^2, which is a good
feature, not a bad one.
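A quick numerical illustration (the exponential covariance, sill and coordinates are made-up values, not from the thread): the ordinary kriging variance is well below sigma^2 near the data and only approaches sigma^2 (plus the mean-estimation term) far from all data.

    import numpy as np

    def cov(h, sill=2.0, rang=10.0):             # exponential covariance, sill = sigma^2
        return sill * np.exp(-h / rang)

    x   = np.array([[0., 0.], [3., 1.], [1., 4.], [8., 2.]])   # data locations
    one = np.ones(len(x))
    C   = cov(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))
    A   = np.block([[C, -one[:, None]], [one[None, :], np.zeros((1, 1))]])

    for x0 in (np.array([2., 2.]), np.array([200., 200.])):    # near vs far target
        c0 = cov(np.linalg.norm(x - x0, axis=1))
        sol = np.linalg.solve(A, np.append(c0, 1.0))
        w, m = sol[:-1], sol[-1]
        ok_var = cov(0.0) - w @ c0 + m           # OK variance, C w - m 1 = c0 convention
        print(x0, ok_var, cov(0.0))              # compare with the sill sigma^2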

   d) We should forget about kriging variance in the case of interpolation

No. According to your argument, we would only have to forget about kriging
variances in the case of predicting a second earth.


   e) We should analyze our interpolation results out of area of interest
  (to not meet known values with default zero value of kriging
 variance). If kriging variance is equal to sigma^2 (sigma^2  is a
 multiplier in the term of correlation function so we can only analyze
 variance ratio) then we know mean value of V.

This is a way to predict the mean of the field.
  If kriging variance tends to zero it means that predicted value
  disappears in the tail of distribution of random variable V.

Not really. 

   All considerations follow ordinary kriging variance.

   P.S.
   Gaussian noise is unpredictable (spread value is unpredictable).
   Has only mean and variance. Gaussian noise in linear drift also has
   only mean and variance cause detrended gaussian noise in drift has only
   constant mean and variance.

You forgot the word white here. Gaussian white noise is unpredictable. 

Best regards,
Gerald 

   Best Regards
 tom



