Hello everyone,

I am new to this list, but I urgently need a solution for a problem 
I am currently facing.
First, a description of my current platform:
Linux 2.4.18 (originally Slackware, heavily modified)
Apache 1.3.22
PHP 4.1.2
MySQL 3.23.46
(I am aware that these are not the latest versions.)
PHP is compiled statically into Apache.

The problem is with a small script we use to send files out to the user. 
The code is as follows:

                if( $fp = fopen( $images_base.$show, "rb" ) ){
                        // read the file in 4 KB chunks and push each chunk out
                        while( !feof( $fp ) ){
                                print( fread( $fp, 4096 ) );
                                flush();
                        }
                        fclose( $fp );  // only close the handle if fopen() succeeded
                }

This worked perfectly until a few days ago. The files served were always 
under 2MB, but last week we needed to serve a 100MB file, and now the 
httpd process grows to 108MB+ before output even starts. Of course this 
is not very friendly to our hardware: several clients sometimes download 
the files at once, which pushes the server load up to 100, and the kernel 
starts killing httpd processes to free up memory.
Does anyone know how I can stop httpd from 'caching' the entire file? 
I tried adding the flush() call, which didn't help, and I tried the 
readfile() and fpassthru() calls with the same result. I looked through 
the manuals at php.net and apache.org without luck, and I could not find 
a workaround on the Apache side either.
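
In case it helps to see exactly what I have been experimenting with, here is 
a sketch of my current attempt. The fpassthru() line is the variant I 
mentioned above; the ob_end_flush() call is only a guess on my part that 
PHP-level output buffering is where the whole file is being held, and 
$images_base and $show are the same variables as in the script above.

                set_time_limit( 0 );   // large files take a while to send
                @ob_end_flush();       // close a PHP output buffer, if one is active (just a guess)

                if( $fp = fopen( $images_base.$show, "rb" ) ){
                        // fpassthru( $fp );  // variant I tried: dump the rest of the file in one call

                        // chunked variant: 4 KB at a time, flushed immediately
                        while( !feof( $fp ) ){
                                print( fread( $fp, 4096 ) );
                                flush();
                        }
                        fclose( $fp );
                }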

So, as a last option, I am turning to this list.
Thank you for your time,
-- Joost


