On Tue, 27 Nov 2001, William A. Rowe, Jr. wrote:
Guys, we're not communicating worth a damn. :)  I think we're probably all saying the same things without realizing it.

> > > Bill's idea was to map and then unmap the individual sections of the
> > > file, so that no more than one section was mapped at a time.
>
> No, properly I implied that we have no more sections mapped than we are
> actually interested in, at any given time.

Right. And you can be interested in more than one at a time. As soon as one filter becomes interested in a section of the file, the whole server is interested until that section goes out onto the network or its bucket gets deleted, in which case we don't care about it anymore by definition.

> > It might not be exactly what he said, but I thought it's what he _meant_.
> > Ahh, don't you love the expressiveness of email? :-)  One at a time is
> > typically what you get with the code I posted anyhow since most filters
> > limit how much they'll buffer, but the one-at-a-time rule is just not
> > enforced. If it were enforced, then oh yeah, that would be bad. :)
>
> No, I really implied that we do unmap those that are consumed. If a really
> bad filter reads in all 200 buckets of 4MB each, then the system will start
> flailing. But no filter author would create such a design, no :-?
> If you read from the bucket (mmap) and pass it on, the memory footprint
> won't get out of hand.

That's what I meant! :)  Yes, there is a pathological case where you'll try to mmap a whole huge 200MB file at once because some jackass (e.g. the content-length filter) buffered the whole damned thing, but at some point the OS will say "you can't mmap that much stuff all at the same time," and we'll gracefully fall back on the read-8KB-at-a-time method. (If we're saying that we want to be more restrictive about how much we can mmap at a time system-wide, I say that's up to APR to decide for us if it doesn't already, in which case this is still the right approach from apr-util's perspective.)
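For illustration, here's a minimal self-contained sketch of that map-a-chunk, fall-back-to-8KB-reads strategy. It uses plain POSIX mmap/pread rather than the actual apr_bucket code, and `MMAP_LIMIT`, `deliver_chunk`, and the 8KB buffer are stand-ins for the real APR_MMAP_LIMIT and bucket-read path, not the patch itself:

```c
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define MMAP_LIMIT (4 * 1024 * 1024)   /* stand-in for APR_MMAP_LIMIT */

/* Deliver the next chunk of the file starting at *offset: try to mmap up
 * to MMAP_LIMIT bytes, and if the mmap fails, gracefully fall back to a
 * plain 8KB read.  Returns the number of bytes consumed, 0 at EOF. */
static ssize_t deliver_chunk(int fd, off_t *offset, off_t file_len)
{
    off_t remaining = file_len - *offset;
    size_t len = (remaining > MMAP_LIMIT) ? MMAP_LIMIT : (size_t)remaining;

    if (len == 0)
        return 0;

    void *addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, *offset);
    if (addr != MAP_FAILED) {
        /* ...filters would consume the mapped region here... */
        munmap(addr, len);             /* unmap as soon as it's consumed */
        *offset += len;
        return (ssize_t)len;
    }

    /* mmap refused: fall back on the read-8KB-at-a-time method */
    char buf[8192];
    ssize_t n = pread(fd, buf, sizeof(buf), *offset);
    if (n > 0)
        *offset += n;
    return n;
}
```

Because MMAP_LIMIT and the 8KB fallback buffer are both multiples of the page size, successive offsets stay page-aligned, which mmap requires.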
Are we on the same page yet? I hope? If so, does the patch I sent in earlier implement that page? I think it does.

All it says is that if the file is bigger than APR_MMAP_LIMIT, then we try to mmap a limit's worth of the file. If that succeeds, we split the file bucket into two parts: the first part is APR_MMAP_LIMIT in size, and the other part is the rest of the file. We turn the first of those two into an mmap bucket, shoving in the mmap we just created. If the mmap fails, we abort and fall back on the read-8KB method. If the file is less than APR_MMAP_LIMIT in size, we do the same thing we did before, i.e., mmap the whole file.

When all references to a given mmap go away, we delete the mmap. If we're the owners of the mmap (which we will be in this case), that means we munmap the region.

--Cliff

--------------------------------------------------------------
Cliff Woolley
[EMAIL PROTECTED]
Charlottesville, VA
