Brandon Fosdick wrote:
> It does seem like a rather small and arbitrary limit. I can't think of
> what else besides Apache would cause it, but I could be missing
> something. The files are being dumped into MySQL in 64K blocks. The
> machine is an amd64, so that shouldn't be a problem, and 700MB isn't
> anywhere near 2 or 4GB anyway. Uploading from a cable modem doesn't come
> close to saturating the disk, CPU, or network. I've tried OS X, Win2k,
> and WinXP, all with the same result. I'm running out of things to check.
> Any suggestions? I guess it could be a limit in mod_dav itself. I'm
> afraid to go there... it looks messy.


In the interest of posterity and curiosity I played with Apache 2.0.55 a little 
more, and I thought I'd document the results here in case anyone cares.

My test setup is a server running FreeBSD 6.1-PRE on a Sempron 3100+, with a 
PowerBook G4 (Tiger 10.4.5) acting as the client. I wrote a mod_dav storage 
provider that uses MySQL as a back end; files are stored as a series of 64K 
records.

Previously I had reported that uploads of files larger than ~700MB fail. On a 
hunch I took half the RAM out of the server, just to see if it was a memory 
problem. Sure enough, the limit now appears to be in the 300-400MB range 
(specifically, a Win2k ISO fails). That still seemed odd, though: from earlier 
testing I knew that httpd/mod_dav streams files to my provider in 2K chunks, 
which are assembled into 64K records and written to the database. It didn't 
seem likely that I had enough bytes buffered at any one time to cause memory 
starvation, and I have more than enough swap configured.
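The reassembly itself is trivial. Here's roughly the shape of it (a simplified sketch, not my actual code; names are made up, and the real provider INSERTs each full 64K record into MySQL instead of just counting flushes):

```c
/* Sketch of 2K-chunk to 64K-record reassembly. Hypothetical names;
 * the real storage provider writes each full record to MySQL. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE  (2 * 1024)   /* size mod_dav streams per call  */
#define RECORD_SIZE (64 * 1024)  /* size of one database record    */

struct record_buf {
    unsigned char data[RECORD_SIZE];
    size_t        used;     /* bytes currently buffered           */
    size_t        flushed;  /* full records written out so far    */
};

/* Stand-in for the INSERT of one 64K record. */
static void flush_record(struct record_buf *rb)
{
    /* real code would write rb->data to the database here */
    rb->used = 0;
    rb->flushed++;
}

/* Called once per chunk of bytes handed over by mod_dav. */
static void provider_write(struct record_buf *rb,
                           const unsigned char *chunk, size_t len)
{
    while (len > 0) {
        size_t room = RECORD_SIZE - rb->used;
        size_t n = len < room ? len : room;
        memcpy(rb->data + rb->used, chunk, n);
        rb->used  += n;
        chunk     += n;
        len       -= n;
        if (rb->used == RECORD_SIZE)
            flush_record(rb);
    }
}
```

With 2K chunks, every 32nd call flushes a record; nothing is held beyond the current 64K record, which is why memory starvation seemed so unlikely.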

So I redid the test with top running (for lack of a better idea) and a debug 
entry going to the error log for each chunk of bytes handed to the storage 
provider. Bytes are still arriving in 2K chunks. I observed the following 
sequence:

1. The OS X transfer window opens, the progress bar runs rapidly to completion, 
and then it says "closing file", which it continues to say for the rest of the 
transfer. httpd and mysqld are using 0% CPU and doing no disk I/O. No log 
entries.

2. httpd jumps to ~20% CPU and 100% disk I/O. Debug log entries begin, still 
reporting 2K chunks.

3. 1-2 minutes later, OS X reports "server disconnected". mod_dav is still 
streaming 2K chunks, and httpd is still at 100% disk I/O.

4. 1-2 minutes later, the httpd child appears to crash and respawn. It then 
seems to either restart the transfer or re-stream the file to storage, but at 
this point the log file becomes corrupt.

5. If I cancel the transfer at this point, at least one of the httpd processes 
starts using 100% CPU and 100% disk I/O, so I killall httpd.

For completeness, I re-ran the above test without the debug logging and without 
mod_security, and observed the same behavior.

At this point I'm not sure whether I should bother trying the large-file hack 
for 2.0.55 or just start migrating to 2.2.x. This no longer seems to be a 
large-file problem, but I'm not sure what kind of problem it is. Judging by the 
mod_security audit log, the client doesn't appear to be doing anything odd: 
it just issues a LOCK and a PUT for each file upload. Naturally, my own code is 
suspect as well, but it never sees anything bigger than 64K, and it handles 
larger files fine when I haven't taken half the RAM out.
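For concreteness, the audit log shows nothing fancier per upload than the usual pair of requests, roughly like this (headers trimmed; the path and length here are hypothetical, not copied from my log):

```
LOCK /dav/win2k.iso HTTP/1.1
Host: server
Content-Type: text/xml
...

PUT /dav/win2k.iso HTTP/1.1
Host: server
Content-Length: 367001600
...
```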

I'm stumped.

Unless anyone has a better idea I'll just give 2.2 a try and see what happens.
