In a 100-user test, the Flood 1.1 executable under Windows shows steadily
growing memory usage, even while the number of threads is slowly
diminishing (at the tail end of the test).
While running a "Flood clone" which uses the pools-based Flood
memory model, but generates both more data per URL and data which must be
maintained per page and per session, we find that under heavy load and/or
long durations we see (1) segmentation faults, (2) cases where users seem
to hang around without generating any data, and (3) data (timings, in
particular) that seems suspect when we are "hitting the rail". I was
wondering (1) whether other people have seen this issue with this or other
applications using APR pools, and (2) whether there is any "quick" fix
people can see to remedy this problem. I understand that there is still
work to be done in Flood to create pools at lower levels, but the habit of
simply allocating memory whenever you need it without cleaning up (on the
assumption that the pools will be cleaned up at a higher stage) is a
practice bordering on disaster. I was also wondering whether APR (current,
or from about a year ago) has been tuned to prevent memory leaks, or
whether our design currently enforces this behavior. I suspect that not
much research has been done here. The goal of this email is not to knock
the current development of Flood but to ask for help in resolving an issue
we are facing. If we can overcome ours, we can also help Flood overcome
its own potential issues as well.
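To illustrate the allocation pattern I mean, here is a minimal sketch in
plain C of a toy arena that mimics APR pool semantics (nothing is freed
individually; the whole pool is cleared at once). This is a hypothetical,
stdlib-only illustration, not Flood or APR code: real code would use
apr_pool_create()/apr_palloc()/apr_pool_clear()/apr_pool_destroy(). It
shows why clearing a per-iteration subpool keeps the footprint bounded,
while deferring cleanup to a higher stage makes memory grow for the whole
run.

```c
#include <stdlib.h>

/* Toy arena mimicking APR pool semantics: allocations are never freed
 * one at a time; the whole arena is cleared or destroyed at once.
 * (Hypothetical sketch only -- not the APR implementation.) */
typedef struct block { struct block *next; } block;

typedef struct {
    block *head;       /* raw allocations owned by the arena */
    size_t live_bytes; /* bytes currently held, for demonstration */
} arena;

static void arena_init(arena *a) { a->head = NULL; a->live_bytes = 0; }

static void *arena_alloc(arena *a, size_t n) {
    block *b = malloc(sizeof(block) + n);
    b->next = a->head;
    a->head = b;
    a->live_bytes += n;
    return b + 1;
}

/* Analogous to apr_pool_clear(): release everything at once. */
static void arena_clear(arena *a) {
    block *b = a->head;
    while (b) { block *next = b->next; free(b); b = next; }
    a->head = NULL;
    a->live_bytes = 0;
}

/* Simulate one user hitting `urls` URLs, allocating per-URL timing data.
 * With a per-iteration clear (a subpool scoped to one request), peak
 * usage stays at one URL's worth; without it, usage grows linearly. */
static size_t peak_usage(int urls, size_t per_url, int clear_each_iter) {
    arena a;
    arena_init(&a);
    size_t peak = 0;
    for (int i = 0; i < urls; i++) {
        arena_alloc(&a, per_url);
        if (a.live_bytes > peak) peak = a.live_bytes;
        if (clear_each_iter) arena_clear(&a);
    }
    arena_clear(&a);
    return peak;
}
```

For example, peak_usage(100, 1024, 1) holds at most 1 KB at a time,
whereas peak_usage(100, 1024, 0) grows to 100 KB before the final clear,
which is the shape of the growth we are seeing in the long-running tests.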

I appreciate any responses.

-Norman Tuttle, software developer, OpenDemand Systems
[EMAIL PROTECTED]
