Greetings,

I want to compare the performance of two samplers, A and B. The two
samplers take different numbers of samples, and each is evaluated by
linearly interpolating between its sample points.

For example, if we sample at time t and record 10, then sample at time
t+2 and record 20, linear interpolation implies a value of 15 at t+1.
If the actual value at t+1 was 17, the error at t+1 is 2. The sum of
squared errors (SSE) is calculated for each sampler over the time
period t = 1 to t = n, along with the number of samples taken.
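
To make the calculation concrete, here is a rough Python sketch of how
I am computing the SSE for one sampler (the function and variable
names are purely illustrative, not an established routine):

    import numpy as np

    # true_values[t-1] holds the actual series value at t = 1..n;
    # sample_times is the increasing list of times at which the sampler
    # recorded a value (the recorded value is the true value there).
    def sampler_sse(sample_times, true_values):
        true_values = np.asarray(true_values, dtype=float)
        sample_times = np.asarray(sample_times)
        n = len(true_values)
        t_all = np.arange(1, n + 1)
        # linearly interpolate between the recorded sample points;
        # times outside the sampled range are held at the nearest sample
        interpolated = np.interp(t_all, sample_times,
                                 true_values[sample_times - 1])
        # errors are measured only at the n - n_a times not sampled
        unsampled = np.setdiff1d(t_all, sample_times)
        errors = true_values[unsampled - 1] - interpolated[unsampled - 1]
        return float(np.sum(errors ** 2))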

Thus, if A samples n_a times, the SSE for A is based on (n - n_a)
error measurements. Similarly, if B samples n_b times, the SSE for B
is based on (n - n_b) error measurements. Can I use a simple comparison of
variances to compare A and B? If so, what exactly is the formula I
should use, and what distribution can I compare the result to in order
to determine whether there is a significant difference between them?
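
To be explicit about what I mean by a "simple comparison of
variances", the calculation I have in mind looks something like the
following (only a sketch, and it assumes the interpolation errors are
independent and roughly normal, which is part of what I am unsure
about):

    from scipy import stats

    def compare_samplers(sse_a, df_a, sse_b, df_b):
        # mean squared error for each sampler;
        # df_a = n - n_a, df_b = n - n_b
        ms_a = sse_a / df_a
        ms_b = sse_b / df_b
        f_ratio = ms_a / ms_b
        # two-sided p-value against an F distribution
        # with (df_a, df_b) degrees of freedom
        p_lower = stats.f.cdf(f_ratio, df_a, df_b)
        p_value = 2 * min(p_lower, 1 - p_lower)
        return f_ratio, p_value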

Thanks in advance,
Don