Andre,
That is what I'm currently doing:
$request->content_type("multipart/x-mixed-replace;boundary=\"$this->{boundary}\";");
and then each chunk is printed like this (no length specified):
while (<some condition>) {
$request->print("--$this->{boundary}\n");
$request->print("Content-type: text/html; charset=utf-8;\n\n");
$request->print("$data\n\n");
$request->rflush;
}
And the result is endless memory growth in the Apache process. Is that what
you had in mind?
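
For completeness, here is a self-contained sketch of that handler, assuming
mod_perl 2; the open_source_socket() helper, the boundary string, and the
8 KB read size are placeholders, not my actual code:

use strict;
use warnings;
use Apache2::RequestRec ();              # for $r->content_type
use Apache2::RequestIO ();               # for $r->print and $r->rflush
use Apache2::Const -compile => qw(OK);

sub handler {
    my ($r) = @_;
    my $boundary = 'my-boundary';        # placeholder boundary string
    my $sock = open_source_socket();     # hypothetical: returns the Unix socket handle
    $r->content_type(qq{multipart/x-mixed-replace; boundary="$boundary"});

    # sysread() into a fixed-size buffer so each iteration holds at most 8 KB.
    while (sysread($sock, my $buf, 8192)) {
        $r->print("--$boundary\n");
        $r->print("Content-type: text/html; charset=utf-8\n\n");
        $r->print("$buf\n\n");
        $r->rflush;                      # flush this part to the client immediately
    }
    return Apache2::Const::OK;
}

Bounding the sysread() size at least caps how much data Perl itself holds per
iteration, though it does not by itself explain the per-request growth.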
On Mar 16, 2010, at 12:50 PM, André Warnier wrote:
> Pavel Georgiev wrote:
> ...
>> Let me make sure I'm understanding this right - I'm not using any buffers
>> myself, all I do is sysread() from a Unix socket and print(), it's just
>> that I need to print a large amount of data for each request.
>>
> ...
> Taking the issue at the source: can you not arrange to sysread() and/or
> print() in smaller chunks?
> There exists something in HTTP named "chunked transfer encoding"
> (sent with the Transfer-Encoding: chunked response header). It
> consists of sending the response to the browser without an overall
> Content-Length response header, but indicating that the response is
> chunked. Then each "chunk" is sent with its own length, and the
> sequence ends with (if I remember correctly) a last chunk of size zero.
> The browser receives each chunk in turn, and re-assembles them.
> I have never had the problem myself, so I never looked deeply into it.
> But it just seems to me that, before going off into more complicated
> solutions, it might be worth investigating.
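
For concreteness, each chunk in the format André describes is its byte count
in hex, then CRLF, then the data, then CRLF, and a zero-size chunk terminates
the body. A minimal Perl sketch of that wire format (illustrative only:
Apache normally applies chunked encoding itself when a response has no
Content-Length, so handler code rarely writes this by hand):

use strict;
use warnings;

# Emit one chunk: its size in hex, CRLF, the data, CRLF.
sub print_chunk {
    my ($fh, $data) = @_;
    printf $fh "%x\r\n%s\r\n", length($data), $data;
}

# Example body: two chunks, then the zero-size chunk that ends the response.
print_chunk(\*STDOUT, "Hello, ");
print_chunk(\*STDOUT, "world!\n");
print STDOUT "0\r\n\r\n";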