Hi,

I am writing a computer simulation, and I would really appreciate some
advice about the statistics side of things!

Each simulation run has fixed settings, but there is some randomness
involved (e.g. the start position). As a result, each simulation scenario
needs to be run repeatedly until the variance of the estimated mean output
(say, the time taken for the objective to be met) is sufficiently reduced.

The simulation has just one output that needs measuring - the time
taken - and there is no transient state.

The question is: what accuracy is acceptable, and how can I guarantee
that the variance is small enough for the result to be accurate, while
staying efficient with computing power? Any methods, techniques, etc. are
gladly welcome, as I am new to stats!
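
For example, would a stopping rule along these lines be statistically
sound - keep adding replications until the 95% confidence-interval
half-width of the estimated mean falls below a chosen relative tolerance?
(The placeholder function, tolerance, and run limits below are just
assumptions on my part, not anything I have validated.)

import random
import statistics

def run_scenario(rng):
    # Same placeholder as in the sketch above; my real simulation goes here.
    return rng.expovariate(1.0 / 100.0)

def run_until_precise(rel_precision=0.05, min_runs=10, max_runs=10000, seed=1):
    # Keep adding replications until the 95% confidence-interval half-width
    # of the mean is within rel_precision of the mean, or max_runs is hit.
    rng = random.Random(seed)
    times = [run_scenario(rng) for _ in range(min_runs)]
    z = 1.96  # ~95% normal critical value; a t value would be safer for small n
    while True:
        mean = statistics.mean(times)
        half_width = z * statistics.stdev(times) / len(times) ** 0.5
        if half_width <= rel_precision * mean or len(times) >= max_runs:
            return mean, half_width, len(times)
        times.append(run_scenario(rng))

mean, hw, n = run_until_precise()
print(f"mean = {mean:.1f} +/- {hw:.1f} (95% CI) after {n} runs")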

Thanks.

