Hi everyone,

A simple question:
I have one set of 2D location points, A, that I use as a reference.
I have another set of location points, B, generated by observations.

Is there any standard method/measure to estimate a kind of positional
accuracy error, given that
- A and B don't have the same cardinality, e.g. B could have
more points than A?
- a point in A should be associated with only one point in B.

For the moment I have created my own error measure using three counts.
For a given accuracy threshold (< 20 meters) I compute:
- O: number of omissions (no observation in B is close
enough to a given point in A),
- FP: number of false positives (a B point was observed but is not
close to any A point, or the nearby A point is already taken by
another observation),
- M: number of matches (a B point is close enough to an A point),
and then I aggregate the result as M - (O + FP) to get an indicator
(a short sketch of this follows below).
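
For concreteness, here is a minimal Python sketch of that measure. It
assumes a greedy closest-pair-first matching to enforce the one-to-one
constraint; the function name position_accuracy and the greedy strategy
are my own illustration, not a standard algorithm:

import math

def position_accuracy(A, B, threshold=20.0):
    # A: list of (x, y) reference points; B: list of (x, y) observations.
    # Collect all candidate pairs within the threshold, closest first.
    pairs = sorted(
        (math.dist(a, b), i, j)
        for i, a in enumerate(A)
        for j, b in enumerate(B)
        if math.dist(a, b) <= threshold
    )
    matched_A, matched_B = set(), set()
    for _, i, j in pairs:
        # One-to-one: each A point and each B point is used at most once.
        if i not in matched_A and j not in matched_B:
            matched_A.add(i)
            matched_B.add(j)
    M = len(matched_A)               # matches
    O = len(A) - M                   # omissions (unmatched A points)
    FP = len(B) - M                  # false positives (unmatched B points)
    return M, O, FP, M - (O + FP)    # aggregate indicator

# Example: two reference points, three observations.
A = [(0.0, 0.0), (100.0, 100.0)]
B = [(5.0, 5.0), (102.0, 98.0), (300.0, 300.0)]
print(position_accuracy(A, B))       # -> (2, 0, 1, 1)

Greedy matching is not optimal; an assignment solver on the thresholded
distance matrix (e.g. scipy.optimize.linear_sum_assignment) would give
an optimal one-to-one matching instead.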

I am pretty sure there are other more traditional ways to do that.

Thanks in advance
-NM