Greetings,

I'm involved in a research project that measures the load on a
(computer) network. The response variable is the cumulative byte count,
which is measured at various times determined by an adaptive sampling
technique.

The measurements taken at these times are assumed to be accurate, so I
am using the following technique to judge the accuracy of the
sampling:

Assuming we measure the cumulative byte count at 10 s and 20 s, and
record 100 kB and 200 kB respectively...

1. Linearly interpolate between these two points to get

11 s - 110 kB
12 s - 120 kB
...

2. Calculate the difference between these interpolated values and the
actual values at 11 s, 12 s, ...

3. Use RMSE, SSE, or a similar summary statistic to get an overall
measure of error (a rough sketch of this is given below).
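
In case it helps make the procedure concrete, here is a minimal sketch
in Python (assuming NumPy is available). The function name and argument
names are just placeholders; the "truth" arrays would come straight
from the off-line data set.

import numpy as np

def interpolation_error(sample_times, sample_bytes, truth_times, truth_bytes):
    # sample_times/sample_bytes: points chosen by the adaptive sampler
    # truth_times/truth_bytes:   the full trace from the off-line data set
    # Step 1: linearly interpolate the sampled curve at the truth time points
    interp_bytes = np.interp(truth_times, sample_times, sample_bytes)
    # Step 2: residuals between interpolated and actual cumulative counts
    residuals = interp_bytes - np.asarray(truth_bytes, dtype=float)
    # Step 3: overall error summaries
    sse = float(np.sum(residuals ** 2))
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return sse, rmse

With the example above, sample_times = [10, 20] and sample_bytes =
[100, 200], while truth_times/truth_bytes are the per-second values
taken from the off-line trace; linear interpolation at 15 s, for
instance, gives 100 + (15-10)/(20-10)*(200-100) = 150 kB.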


The obvious question is "How do you know the actual values at 11 s,
12 s, ...?"
The answer is that I am using an off-line data set, rather than running
the experiment in real time, to test the sampling algorithm.

Anyway, my question is: how valid is this method of assessing the
accuracy of the sampling technique given that there is no estimate of
"pure error" at the sample points?

Thanks in Advance,
Dónal