Robert Holmes wrote:
> In my role as FRIAM's official Cassandra (I should get a T-shirt 
> printed), has anyone ever shown that these highly intensive 
> simulations give quantitatively better results than, say, something 
> written on Owen's laptop in NetLogo? Do we know that we get a better 
> assessment of (for example) the robustness of policies for stopping 
> epidemic spread or do we rely on the "more is better" argument? ("Of 
> course, the results are better - we have an NSF grant and 15 
> supercomputers").
In a model, each added parameter carries a statistical cost that needs to 
be justified in terms of how much more accurate it makes the simulation's 
predictions.   It happens that for agent models, there's also a strong 
practical incentive to keep the parameter count down, because the 
sensitivity analysis gets so hard to do.   Even after making many 
simplifying assumptions, a sensitivity analysis can easily involve a huge 
space to explore.  If a simulation runs for only 1 minute and there are 
65536 plausible simulation configurations (e.g. 16 binary switches to 
throw, presumably drawn from a space of many more implausible ones), 
that's still about 45 days of runtime on one computer to fully explore 
the space and build a distribution of outcomes.  It is not hard to find 
16 plausible parameters to put into a realistic model.
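
To make the arithmetic concrete, here is a minimal sketch of that 
full-factorial sweep in Python.  run_model is a hypothetical stand-in for 
a one-minute simulation; the rest just counts configurations and runtime.

    import itertools

    N_SWITCHES = 16        # binary on/off parameters
    MINUTES_PER_RUN = 1.0  # assumed cost of one simulation run

    def run_model(config):
        """Hypothetical placeholder: run one simulation under a tuple
        of 16 on/off settings and return an outcome."""
        raise NotImplementedError

    configs = list(itertools.product([0, 1], repeat=N_SWITCHES))
    print(len(configs))                                # 65536 configurations
    print(len(configs) * MINUTES_PER_RUN / (60 * 24))  # ~45.5 days, one machine

    # outcomes = [run_model(c) for c in configs]  # the part that takes 45 days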

Rare sequences of events can be important to disease propagation and 
their consequences can depend very much on population density, 
transportation behaviors of that population, and available health 
interventions.   A modeler could describe a particular parameterization 
of a particular microcosm, but that's probably not what most planners 
are interested in, except to the extent some are just fishing around for 
a cartoon dramatization to show their boss.
Unfortunately, to observe a rare event in a realistic setting, each 
instance of a simulation can be pretty substantial, because you're 
waiting for those rare events to occur in a changing environment.   If 
the goal is to correlate that changing environment with the rare events, 
it becomes harder still: the frequency at which the co-occurring events 
can be observed is even lower, so getting a confidence interval on those 
combinations is hard.
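
A minimal sketch of that cost, assuming each run independently shows the 
event with probability p (the values below are hypothetical) and using 
the normal approximation for a binomial proportion:

    import math

    def runs_needed(p, rel_error=0.1, z=1.96):
        """Runs required so the 95% confidence interval half-width,
        z * sqrt(p * (1 - p) / n), equals rel_error * p."""
        return math.ceil(z**2 * (1 - p) / (rel_error**2 * p))

    print(runs_needed(1e-2))  # ~38,000 runs for a 1-in-100 event
    print(runs_needed(1e-4))  # ~3.8 million runs for a 1-in-10,000 event

The half-width only shrinks as 1/sqrt(n), so for a fixed relative error 
each factor of 100 in rarity costs roughly a factor of 100 in runs.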

Facts about model components can be hard to come by, independent of any 
intuition about how the whole thing behaves.  You may not know exactly 
how a disease behaves in specific contexts, so you have to cover a range 
of possibilities (e.g. one switch selecting between the high and low ends 
of what you think could happen, best and worst case, assuming the effect 
is linear and independent of the other variables, which it may not be). 
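
Here is a minimal sketch of that high/low bracketing, with a hypothetical 
response function; the beta12 interaction term is exactly what the 
linear-and-independent assumption ignores:

    LOW, HIGH = 0.0, 1.0  # worst-case / best-case settings for two parameters

    def response(x1, x2, beta1=1.0, beta2=1.0, beta12=0.0):
        """Hypothetical model output; beta12 != 0 couples the two switches."""
        return beta1 * x1 + beta2 * x2 + beta12 * x1 * x2

    for x1 in (LOW, HIGH):
        for x2 in (LOW, HIGH):
            assumed = response(x1, x2)               # what bracketing assumes
            coupled = response(x1, x2, beta12=2.0)   # with an interaction
            print((x1, x2), assumed, coupled)

At (HIGH, HIGH) the coupled output is 4.0 rather than 2.0, so checking 
the high/low corners alone can misjudge the extremes.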

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
