Re: Dangerous Flood memory model compromises larger runs
On Thu, 13 Nov 2003, Norman Tuttle wrote:

> How do the pools define "if possible" in your wording below (i.e., how
> would the pool know when to reuse memory)?

It's kind of complicated, so I don't know how well I can explain it off
the top of my head (Sander, feel free to jump in here :), but it keeps
freelist buckets of varying power-of-two sizes, and if it finds one of
the appropriate size, it will use it. But there are two levels of things
going on, too, because the allocator hands out blocks of a certain size,
which the pools then divide up into smaller blocks... Oy. Sander? Help
me out. :)
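To make the freelist idea concrete, here is a minimal toy sketch in C. This is NOT APR's actual allocator (which also splits allocator blocks into smaller pool allocations, as Cliff notes); it only illustrates the single mechanism described: freed blocks are kept in buckets indexed by power-of-two size class, and a later allocation of the same class reuses a bucketed block instead of going back to the system. All names here (`toy_alloc`, `toy_free`, `size_class`) are invented for the illustration.

```c
/* Toy power-of-two freelist, for illustration only (not APR's code). */
#include <assert.h>
#include <stdlib.h>

#define NBUCKETS 16                  /* size classes 2^3 .. 2^15 bytes */

typedef struct node { struct node *next; } node;
static node *buckets[NBUCKETS];      /* freelist per size class */

/* Smallest k with 2^k >= size; minimum 8 bytes so a freed block
 * can hold the freelist pointer that threads it into its bucket. */
static int size_class(size_t size)
{
    int k = 3;
    while (((size_t)1 << k) < size)
        k++;
    return k;
}

static void *toy_alloc(size_t size)
{
    int k = size_class(size);
    if (buckets[k]) {                /* reuse a freed block "if possible" */
        node *n = buckets[k];
        buckets[k] = n->next;
        return n;
    }
    return malloc((size_t)1 << k);   /* otherwise ask the system */
}

static void toy_free(void *p, size_t size)
{
    int k = size_class(size);
    node *n = p;                     /* hang onto the block for reuse */
    n->next = buckets[k];
    buckets[k] = n;
}
```

The "hang onto memory and reuse it" behavior is visible directly: freeing a 100-byte allocation and then asking for 90 bytes lands in the same 128-byte size class, so the same block comes back without touching `malloc()` again.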
Re: Dangerous Flood memory model compromises larger runs
How do the pools define "if possible" in your wording below (i.e., how
would the pool know when to reuse memory)?

-Norman Tuttle

On Thu, 13 Nov 2003, Cliff Woolley wrote:

> On Thu, 13 Nov 2003, Norman Tuttle wrote:
>
> > around without generating any data, and (3) that the data (timings, in
> > particular) itself seems to be suspect when we are in the process of
> > "hitting the rail". I was wondering whether (1) other people have seen
> > this issue with this or other applications using apr pools, and (2)
> > whether there is any "quick" fix that people can see to remedy this
> > problem. I understand that there is still work to be done to Flood to
>
> APR pools allocate but do not automatically deallocate. They hang onto
> the memory they have and reuse it later if possible. If you want to set
> a limit on the amount the pools' underlying allocator will hang onto,
> use apr_allocator_create(), call apr_allocator_set_max_free() or
> whatever it's called, and then use apr_pool_create_ex() to create the
> pool with that "limited" allocator.
>
> Have a look at the prefork or worker MPMs from httpd for an example.
>
> --Cliff
Re: Dangerous Flood memory model compromises larger runs
On Thu, 13 Nov 2003, Norman Tuttle wrote:

> around without generating any data, and (3) that the data (timings, in
> particular) itself seems to be suspect when we are in the process of
> "hitting the rail". I was wondering whether (1) other people have seen
> this issue with this or other applications using apr pools, and (2)
> whether there is any "quick" fix that people can see to remedy this
> problem. I understand that there is still work to be done to Flood to

APR pools allocate but do not automatically deallocate. They hang onto
the memory they have and reuse it later if possible. If you want to set
a limit on the amount the pools' underlying allocator will hang onto, use
apr_allocator_create(), call apr_allocator_set_max_free() or whatever
it's called, and then use apr_pool_create_ex() to create the pool with
that "limited" allocator.

Have a look at the prefork or worker MPMs from httpd for an example.

--Cliff
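As a sketch of what Cliff describes (the setter he half-remembers is spelled apr_allocator_max_free_set() in APR; this follows the pattern the httpd prefork MPM uses, but it is an uncompiled illustration, not flood code — `make_limited_pool` and `max_free_bytes` are invented names):

```c
/* Sketch: a root pool whose underlying allocator caps how much freed
 * memory it retains.  Pattern borrowed from httpd's prefork MPM; not
 * compiled here against any particular APR version. */
#include <apr_allocator.h>
#include <apr_pools.h>

apr_pool_t *make_limited_pool(apr_size_t max_free_bytes)
{
    apr_allocator_t *allocator;
    apr_pool_t *pool;

    if (apr_allocator_create(&allocator) != APR_SUCCESS)
        return NULL;

    /* Cap the free memory the allocator hangs onto for reuse; blocks
     * beyond this are returned to the system when the pool is cleared. */
    apr_allocator_max_free_set(allocator, max_free_bytes);

    /* NULL parent: a new root pool backed by the limited allocator. */
    if (apr_pool_create_ex(&pool, NULL, NULL, allocator) != APR_SUCCESS) {
        apr_allocator_destroy(allocator);
        return NULL;
    }

    /* Tie the allocator's lifetime to the pool so destroying the pool
     * also destroys the allocator. */
    apr_allocator_owner_set(allocator, pool);
    return pool;
}
```

Subpools created under this root share its allocator, so the cap applies to the whole pool tree hung off it.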
Dangerous Flood memory model compromises larger runs
In a 100-user test, the Flood 1.1 executable under Windows consistently
grows in the amount of memory it maintains, even while the number of
threads is slowly diminishing (at the tail end of the test).

While running a "Flood clone" which uses the pools-based Flood memory
model, but which also generates both more data per URL and data which
must be maintained per page and session, we find that under heavy load
and/or long durations we are seeing (1) segmentation faults, (2) cases
where users seem to be hanging around without generating any data, and
(3) that the data (timings, in particular) itself seems to be suspect
when we are in the process of "hitting the rail". I was wondering whether
(1) other people have seen this issue with this or other applications
using apr pools, and (2) whether there is any "quick" fix that people can
see to remedy this problem.

I understand that there is still work to be done to Flood to generate
pools at lower levels, but the habit of just allocating memory when you
need it without cleaning up (since you can wait for the pools to be
cleaned up at a higher stage) is a practice bordering on disaster. I was
also wondering whether the APR (current or from about a year ago) has
been tuned to prevent memory leaks or whether our design currently
enforces this. I suspect that not much research has been done here.

The goal of this email is not to knock the current development on Flood
but to ask for help in resolving an issue we are facing. If we can
overcome ours, we can also help Flood overcome its own potential issues
as well. I appreciate any responses.

-Norman Tuttle, software developer, OpenDemand Systems [EMAIL PROTECTED]
[STATUS] (flood) Wed Nov 12 23:45:48 EST 2003
flood STATUS:                                           -*-text-*-
Last modified at [$Date: 2003/07/01 20:55:12 $]

Release:
    1.0:           Released July 23, 2002
    milestone-03:  Tagged January 16, 2002
    ASF-transfer:  Released July 17, 2001
    milestone-02:  Tagged August 13, 2001
    milestone-01:  Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

    * "Everything needs to work perfectly"

Other bugs that need fixing:

    * I get a SIGBUS on Darwin with our examples/round-robin-ssl.xml
      config, on the second URL. I'm using OpenSSL 0.9.6c 21 dec 2001.

    * iPlanet sends "Content-length" - there is a hack in there now to
      recognize it. However, all HTTP headers need to be normalized
      before checking their values. This isn't easy to do. Grr.

    * OpenSSL 0.9.6 segfaults under high load. Upgrade to OpenSSL 0.9.6b.
      Aaron says: I just found a big bug that might have been causing
      this all along (we weren't closing ssl sockets). How can I
      reproduce the problem you were seeing to verify if this was the
      fix?

    * SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
      and at least bomb with a good error message. (See Doug's patch.)
      Status: This is fixed, no?

    * If APR has disabled threads, flood should as well. We might want
      to have an enable/disable parameter that does this also, providing
      an error if threads are desired but not available.

    * flood needs to clear pools more often. With a long running test it
      can chew up memory very quickly. We should just bite the bullet
      and create/destroy/clear pools for each level of our model: farm,
      farmer, profile, url/request-cycle, etc.

    * APR needs to have a unified interface for ephemeral port
      exhaustion, but apparently Solaris and Linux return different
      errors at the moment. Fix this in APR, then take advantage of it
      in flood.

    * The examples/analyze-relative scripts fail when there are fewer
      than 5 unique URLs.
Other features that need writing:

    * More analysis and graphing scripts are needed.

    * Write a robust tool (using tethereal perhaps) to take network
      dumps and convert them to flood's XML format.
      Status: Justin volunteers. Aaron had a script somewhere that is a
      start. Jacek is working on a Mozilla application, codename "Flood
      URL bag" (much like Live HTTP Headers), and a small HTTP proxy.

    * Get chunked encoding support working.
      Status: Justin volunteers. He got sidetracked by the httpd
      implementation of input filtering and never finished this. This is
      a stopgap until apr-serf is completed.

    * Maybe we should make randfile and capath runtime directives that
      come out of the XML, instead of autoconf parameters.

    * We are using apr_os_thread_current() and getpid() in some places
      when what we really want is a GUID. The GUID will be used to
      correlate raw output data with each farmer. We may wish to print a
      unique ID for each of farm, farmer, profile, and url to help in
      postprocessing.

    * We are using strtol() in some places and strtoll() in others.
      Pick one (Aaron says strtol(), but he's not sure).

    * Validation of responses (known C-L, specific strings in response)
      Status: Justin volunteers

    * HTTP error codes (i.e., teach it about 302s)
      Justin says: Yeah, this won't work with round_robin as
      implemented. Need a linked-list-based profile where we can insert
      new URLs into the sequence.

    * Farmer (single thread, multiple profiles)
      Status: Aaron says: If you have threads, then any Farmer can be
      run as part of any Farm. If you don't have threads, you can
      currently only run one Farmer named "Joe" right now (this will be
      changed so that if you don't have threads, flood will attempt to
      run all Farmers in serial under one process).

    * Collective (single host, multiple farms)
      This is a number of Farms that have been fork()ed into child
      processes.
    * Megaconglomerate (multiple hosts, each running a Collective)
      This is a number of Collectives running on a number of hosts,
      invoked via RSH/SSH or maybe even some proprietary mechanism.

    * Other types of urllists:
      a) Random / random-weighted
      b) Sequenced (useful with cookie propagation)
      c) Round-robin
      d) Chaining of the above strategies
      Status: Round-robin is complete.

    * Other types of reports
      Status: Aaron says: "simple" reports are functional. Justin added
      a new type that simply prints the approx. timestamp when the test
      was run, and the result as OK/FAIL; it is called "easy reports"
      (see flood_easy_reports.h). Fur
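The "clear pools more often" item above can be sketched as follows. This is an illustrative shape only, not flood's actual code (`run_request_cycles`, `profile_pool`, and `count` are invented names): each request cycle gets a subpool that is cleared every iteration, so per-request allocations are returned to the allocator immediately instead of accumulating until the profile or farm is torn down.

```c
/* Illustrative only: per-request-cycle subpool, cleared each iteration
 * (the create/clear/destroy-per-level idea from the STATUS item). */
#include <apr_pools.h>

static apr_status_t run_request_cycles(apr_pool_t *profile_pool, int count)
{
    apr_pool_t *request_pool;
    apr_status_t rv = apr_pool_create(&request_pool, profile_pool);
    if (rv != APR_SUCCESS)
        return rv;

    for (int i = 0; i < count; i++) {
        /* ... allocate request/response scratch data out of
         * request_pool here ... */

        /* Hand this cycle's memory back before the next iteration,
         * rather than waiting for profile_pool to be destroyed. */
        apr_pool_clear(request_pool);
    }

    apr_pool_destroy(request_pool);
    return APR_SUCCESS;
}
```

Note that cleared memory is still retained by the pool's allocator for reuse (per the thread above), so pairing this with a max-free-limited allocator is what actually bounds the process footprint.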
[STATUS] (perl-framework) Wed Nov 12 23:45:51 EST 2003
httpd-test/perl-framework STATUS:                       -*-text-*-
Last modified at [$Date: 2002/03/09 05:22:48 $]

Stuff to do:

    * finish the t/TEST exit code issue (ORed with 0x2C if framework
      failed)

    * change existing tests that frob the DocumentRoot (e.g.,
      t/modules/access.t) to *not* do that; instead, have Makefile.PL
      prepare appropriate subdirectory configs for them. Why? So t/TEST
      can be used to test a remote server.

    * problems with -d perl mode, doesn't work as documented
      Message-ID: <[EMAIL PROTECTED]>
      Date: Sat, 20 Oct 2001 12:58:33 +0800
      Subject: Re: perldb

Tests to be written:

    * t/apache
      - simulations of network failures (incomplete POST bodies, chunked
        and unchunked; missing POST bodies; slooow client connexions,
        such as taking 1 minute to send 1KiB; ...)

    * t/modules/autoindex
      - something seems possibly broken with inheritance on 2.0

    * t/ssl
      - SSLPassPhraseDialog exec:
      - SSLRandomSeed exec: