On 11/18/2010 06:15 PM, Brian wrote:
> One is to iterate over the filenames with subrequests (if this is even
> possible/supported), so that each can be passed internally to a single
> request as in the simple (single-file) handler described in the
> example above. If the output of the subrequests can be captured then
> they can be combined into a single response. That idea seems to be the
> cleanest, if not the most efficient.
Although you could get them to work, I don't think sub-requests are your
answer. They run through all of the handler phases and are expected to
return full HTTP responses.
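For reference, a subrequest in mod_perl 2 looks roughly like this (a sketch only; the URI list and handler wiring are illustrative, not from the thread):

```perl
# Sketch: inside a mod_perl 2 response handler.  Each lookup_uri()
# creates a full internal request, and run() pushes it through all the
# handler phases -- the overhead mentioned above.  run() sends output
# straight to the client; capturing it instead takes extra filter work.
use Apache2::RequestRec ();
use Apache2::SubRequest ();   # provides lookup_uri() and run()
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;
    my @uris = qw(/files/a.txt /files/b.txt);   # hypothetical list
    for my $uri (@uris) {
        my $subr = $r->lookup_uri($uri);
        $subr->run;
    }
    return Apache2::Const::OK;
}
```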
> If that doesn't work then I can imagine iterating over the files with
> calls to "sendfile()" and using a modified filter to guess at file
> boundaries. However since the filter needs to be able to handle
> binary content it can't do this by reading the data itself (nor should
> it, since that's inefficient), but it could do so by counting bytes if
> it knows the size of the files ahead of time, or some other
> out-of-band signal like a "flush" bucket that indicates a file
> boundary. However that solution seems messy and prone to error.
Because your out-of-band signal may be split across buckets, the
output-filter approach is probably not your answer either. Once again,
it can be done, but it introduces seemingly unneeded complexity. I
would say the same for tracking boundaries by byte offset.
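To see why offset tracking gets fiddly, here is a minimal pure-Perl sketch of the bookkeeping it requires (all names are hypothetical; the awkward case is a bucket that straddles a file boundary and must be split):

```perl
use strict;
use warnings;

# Given the file sizes up front, attribute each streamed chunk to a
# file by counting bytes.  Returns ([file_index, data], ...).  A chunk
# spanning a boundary is split -- the case that is easy to get wrong.
sub track_boundaries {
    my ($sizes, @chunks) = @_;
    my @spans;
    my $file = 0;
    my $left = $sizes->[0];          # bytes remaining in current file
    for my $chunk (@chunks) {
        while (length $chunk) {
            die "more data than the declared sizes" if $left == 0;
            my $take = length($chunk) < $left ? length($chunk) : $left;
            push @spans, [ $file, substr($chunk, 0, $take) ];
            $chunk = substr($chunk, $take);
            $left -= $take;
            if ($left == 0 && $file < $#{$sizes}) {
                $file++;
                $left = $sizes->[$file];
            }
        }
    }
    return @spans;
}
```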
Unless there is some constraint, the most straightforward approach may
be to implement your routine to modify the file contents as they are
read from disk:
    send_headers();
    $r->print($content_header);
    foreach my $path (@files) {
        my $file = Your::FileFilter->new($path) or die;
        $file->open or die;
        while (my $buf = $file->read) {
            $r->print($buf);
        }
        $file->close or die;
        $r->rflush();
    }
    $r->print($content_footer);
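Your::FileFilter here stands in for whatever class applies your per-file modification; a minimal pure-Perl skeleton of the interface the loop above assumes (the transform itself is just a placeholder) could look like:

```perl
package Your::FileFilter;
use strict;
use warnings;

sub new {
    my ($class, $path) = @_;
    return bless { path => $path, fh => undef }, $class;
}

sub open {
    my $self = shift;
    open my $fh, '<:raw', $self->{path} or return 0;
    $self->{fh} = $fh;
    return 1;
}

# Read the next chunk from disk and apply the per-file transformation.
# Returns undef at EOF, ending the while() loop above.
sub read {
    my $self = shift;
    my $n = read($self->{fh}, my $buf, 8192);
    return undef unless $n;
    return $self->transform($buf);
}

# Placeholder: replace with your real content modification.
sub transform {
    my ($self, $buf) = @_;
    return $buf;
}

sub close {
    my $self = shift;
    close $self->{fh};
}

1;
```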