Re: Large file support in 2.0.56?

2006-04-22 Thread Brandon Fosdick

Brandon Fosdick wrote:
If my theory is correct, then I think the solution is to find a way to 
stream data to the storage provider earlier in the request process. I 
don't know if that's a core issue, or just some config bits in mod_dav, 
or my provider, that need to be fiddled. It's odd that httpd buffers the 
whole thing and then mod_dav streams it in 2K chunks, so I've got a 
feeling there's something in mod_dav that needs tweaking.


More notes...

I found the part in mod_dav that streams the request body to the storage provider (see 
the "Buckets and brigades" thread). It reads a fixed 2K block from the input 
brigade and then passes a pointer to that block to the provider. Rinse and repeat until 
reaching EOS.
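
For reference, the read-and-pass loop is shaped roughly like this (paraphrased from
memory rather than lifted from the mod_dav source, so the function and variable names
are mine; the 2K constant is DAV_READ_BLOCKSIZE from mod_dav.h):

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"
    #include "mod_dav.h"

    /* Shape of the loop: read the body in DAV_READ_BLOCKSIZE (2K) chunks
     * and hand each chunk's pointer to the provider's write_stream hook. */
    static dav_error *stream_body_to_provider(request_rec *r,
                                              const dav_resource *resource,
                                              dav_stream *stream)
    {
        apr_bucket_brigade *bb;
        apr_status_t rv;
        dav_error *err = NULL;
        int seen_eos = 0;

        bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

        do {
            apr_bucket *b;

            rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
                                APR_BLOCK_READ, DAV_READ_BLOCKSIZE);
            if (rv != APR_SUCCESS) {
                break;
            }

            for (b = APR_BRIGADE_FIRST(bb);
                 b != APR_BRIGADE_SENTINEL(bb);
                 b = APR_BUCKET_NEXT(b)) {
                const char *data;
                apr_size_t len;

                if (APR_BUCKET_IS_EOS(b)) {
                    seen_eos = 1;
                    break;
                }

                rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
                if (rv != APR_SUCCESS) {
                    break;
                }

                /* A pointer to this 2K (or 64K) block goes straight to
                 * the storage provider. */
                err = (*resource->hooks->write_stream)(stream, data, len);
                if (err != NULL) {
                    break;
                }
            }

            apr_brigade_cleanup(bb);
        } while (!seen_eos && rv == APR_SUCCESS && err == NULL);

        return err;
    }

So each pass pulls at most one block off the input brigade and hands a pointer to it
to the provider before asking for more.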

On a whim I tried changing 2K to 64K, just to see what would happen. Using 
mod_dav_fs with 2K blocks, the client will time out after ~75MB have been 
written to disk. Using 64K blocks, ~90MB are written to disk.

Not a big difference, but it furthers my suspicion that this problem has more 
to do with timing than with file size. The amount of data written to disk 
appears to depend on the write speed as well as the patience of the client.


Re: Large file support in 2.0.56?

2006-04-21 Thread Brandon Fosdick

Plüm wrote:

Have you checked whether you can write the files to disk with the default 
mod_dav_fs provider?


good suggestion, thanks...

Ok, same test setup that I posted about the other day, but this time I used mod_dav_fs. 


I'm getting slightly different behavior, in that the upload works in certain situations, 
but overall the same symptoms. If I don't cancel the transfer once the client says the 
server disconnected, it will eventually finish. I can get this behavior 
reliably with mod_dav_fs on 2.2.0 (haven't tried 2.0.55). Using my provider I get crashes 
on 2.0.55, but it eventually finishes on 2.2.0. So that's progress, but still not great.

It looks to me, and I could be wrong, as if httpd/mod_dav is trying to buffer 
the entire request before giving it to the provider. The client, after 
finishing the transfer, expects a response from the server, but httpd 
doesn't send a response until the provider has finished with the request. 
The response is therefore delayed long enough that the client thinks the 
server has disappeared, when in fact the server is simply busy. Apparently the 
client (OS X in this case) is smart enough to handle the late response, as long 
as the user hasn't done the obvious thing and clicked the cancel button on the 
error box that pops up. If the user does click cancel, the client sends a stop 
request and httpd/mod_dav stops streaming the file to the provider, leaving an 
incomplete/corrupt file on disk.

Does that make sense to anyone? I don't know enough about the request handling 
process to know if I'm anywhere close on this one.

If my theory is correct, then I think the solution is to find a way to stream 
data to the storage provider earlier in the request process. I don't know if 
that's a core issue, or just some config bits in mod_dav, or my provider, that 
need to be fiddled. It's odd that httpd buffers the whole thing and then 
mod_dav streams it in 2K chunks, so I've got a feeling there's something in 
mod_dav that needs tweaking.


Re: Large file support in 2.0.56?

2006-04-19 Thread Plüm, Rüdiger, VF EITO


 -Original Message-
 From: Brandon Fosdick 
 
 At this point I'm not sure if I should bother trying the 
 large file hack for 2.0.55 or just start migrating to 2.2.x. 
 This no longer seems to be a large file problem, but I'm not 
 sure what kind of problem it is. Judging by the mod_security 
 audit log, the client doesn't appear to be doing anything 
 odd. That is, it's just issuing a LOCK and a PUT for each 
 file upload. Naturally, my own code is suspect as well, but 
 it never sees anything bigger than 64K, and it can handle 
 larger files when I haven't taken half the RAM out.
 
 I'm stumped.

Have you checked whether you can write the files to disk with the default 
mod_dav_fs provider?

Maybe this is a pool issue in your provider. Pool issues can cause
large memory growth.
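
For example, a made-up provider sketch (not your actual code; the dav_stream layout,
stream->p and the store_block* helpers are just placeholders) to show the pattern to
look for:

    #include "apr_pools.h"
    #include "apr_strings.h"
    #include "mod_dav.h"

    /* The provider defines its own dav_stream layout; here it just holds
     * a request-lifetime pool. */
    struct dav_stream {
        apr_pool_t *p;    /* request-lifetime pool */
        /* ... handles, buffers, etc. ... */
    };

    /* Placeholders for whatever the provider does with a block of data. */
    void store_block(dav_stream *stream, const void *buf, apr_size_t len);
    void store_block_from(apr_pool_t *scratch, dav_stream *stream,
                          const void *buf, apr_size_t len);

    /* Anti-pattern: every block is duplicated into a pool that lives for
     * the whole request, so memory grows with the upload size; a 700MB
     * upload ends up holding 700MB in the pool. */
    static dav_error *bad_write_stream(dav_stream *stream,
                                       const void *buf, apr_size_t bufsize)
    {
        void *copy = apr_pmemdup(stream->p, buf, bufsize);
        store_block(stream, copy, bufsize);
        return NULL;
    }

    /* One way to keep it bounded: do the per-block work in a scratch
     * subpool and throw it away after each write. */
    static dav_error *better_write_stream(dav_stream *stream,
                                          const void *buf, apr_size_t bufsize)
    {
        apr_pool_t *scratch;

        apr_pool_create(&scratch, stream->p);
        store_block_from(scratch, stream, buf, bufsize);
        apr_pool_destroy(scratch);
        return NULL;
    }

Doing the per-block work in a scratch pool that is destroyed (or cleared) after every
write keeps the footprint at one block instead of the whole file.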

Regards

Rüdiger


Re: Large file support in 2.0.56?

2006-04-18 Thread Joe Orton
On Mon, Apr 17, 2006 at 02:40:13PM +0100, Colm MacCarthaigh wrote:
 On Mon, Apr 17, 2006 at 09:09:12AM -0400, Jeff Trawick wrote:
  On 4/15/06, Brandon Fosdick [EMAIL PROTECTED] wrote:
   I might have asked this before, but I've forgotten the answer, and so has 
   google. Has any of the large file goodness from 2.2.x made it into 2.0.x? 
   Will it ever?
  
  Different answer than you got before, but I think this is more accurate 
  (Joe?):
  
  Turn on your OS's large file flags in CFLAGS
  
  make distclean && CFLAGS="-D_something" ./configure

Specifically:

  CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure

should work with recent 2.0.x releases, in limited circumstances, may 
break with third-party modules, is not generally recommended, and when 
it breaks you get to keep the pieces, etc.  In particular this does not 
support >2GB request bodies, which it sounds like Brandon wants; you 
really do need 2.2.x for that.

 That works, but does need Joe's split patch;
 
   http://people.apache.org/~jorton/ap_splitlfs.diff

That patch is actually in 2.0.53 and later.

Regards,

joe


Re: Large file support in 2.0.56?

2006-04-17 Thread Jeff Trawick
On 4/15/06, Brandon Fosdick [EMAIL PROTECTED] wrote:
 I might have asked this before, but I've forgotten the answer, and so has 
 google. Has any of the large file goodness from 2.2.x made it into 2.0.x? 
 Will it ever?

Different answer than you got before, but I think this is more accurate (Joe?):

Turn on your OS's large file flags in CFLAGS

make distclean && CFLAGS="-D_something" ./configure

and you get the support. This isn't the default with APR 0.9.x (and
thus Apache httpd 2.0.x) because it breaks binary compatibility with
existing builds.  As long as you use only modules that you can
recompile, and that don't have bugs exposed only with large file support
enabled, you should be okay.
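
If you want to see concretely what the flags change, a throwaway test like this (not
httpd code) prints a different answer with and without them on a 32-bit build; that
size change in off_t, which APR 0.9's offset type follows, is exactly the binary
incompatibility:

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* With -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 on a 32-bit
         * platform, off_t grows from 4 to 8 bytes, which changes struct
         * layouts and function ABIs that existing module binaries expect. */
        printf("sizeof(off_t) = %lu\n", (unsigned long) sizeof(off_t));
        return 0;
    }

Compile it twice, once plain and once with CFLAGS="-D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64", and compare.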


Re: Large file support in 2.0.56?

2006-04-17 Thread Colm MacCarthaigh
On Mon, Apr 17, 2006 at 09:09:12AM -0400, Jeff Trawick wrote:
 On 4/15/06, Brandon Fosdick [EMAIL PROTECTED] wrote:
  I might have asked this before, but I've forgotten the answer, and so has 
  google. Has any of the large file goodness from 2.2.x made it into 2.0.x? 
  Will it ever?
 
 Different answer than you got before, but I think this is more accurate 
 (Joe?):
 
 Turn on your OS's large file flags in CFLAGS
 
 make distclean && CFLAGS="-D_something" ./configure

That works, but does need Joe's split patch;

http://people.apache.org/~jorton/ap_splitlfs.diff

:)

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: Large file support in 2.0.56?

2006-04-17 Thread William A. Rowe, Jr.

Brandon Fosdick wrote:

Nick Kew wrote:

I haven't tried files that size, but that's far too small for LARGE_FILE to
be relevant.  I guess you knew that already, so does something else
lead you to suppose you're hitting an Apache limit?


It does seem like a rather small and arbitrary limit. I can't think of 
what else besides apache would cause it, but I could be missing 
something. The files are being dumped into mysql in 64K blocks. The 
machine is an amd64, so that shouldn't be a problem, and 700MB isn't 
near 2 or 4 GB anyway. Uploading from a cable modem doesn't go anywhere 
near saturating the disk, cpu, or network. I've tried OSX, Win2k and 
WinXP, all with the same result. I'm running out of things to check. Any 
suggestions? I guess it could be a limit in mod_dav itself. I'm afraid 
to go there...it looks messy.


Well, the content length is stored as an int in httpd 2.0, so that also is
an issue (dav has so much metadata I've no idea how much real data can be
put up to the server.)

Keep in mind that you are gonna hit client bugs as well ;-)  At least httpd
version 2.2 should give you a good baseline to separate the client from the
server issues.


Re: Large file support in 2.0.56?

2006-04-16 Thread Brandon Fosdick

Paul Querna wrote:

Is there a specific reason you can't use 2.2.x?


AAA screwiness. I ended up writing a custom auth module for 2.0.x, and last 
time I looked at porting it to 2.2.x my head nearly exploded. And, it seemed 
like there were still some changes in the works. Has all of that settled down 
yet?


Re: Large file support in 2.0.56?

2006-04-16 Thread Brandon Fosdick

William A. Rowe, Jr. wrote:

Actually, not entirely true.  There is some chewy goodness now in 2.0.x,
such as log files which can grow beyond 2GB, from an APR 0.9 APR_LARGE_FILE
hack.  It's a gross hack, which means we can't really provide all sorts of
large file manipulations, but logging, for example, is one of the most
common complaints.


hmmm...that doesn't help me much. I'm more interested in large files in 
mod_dav. Right now I can't upload anything much bigger than 700MB.

But any further work on large files in 2.0 was abandoned long ago, in favor of
getting it right in the first place in 2.2.


Oh well. I guess I better go look at the auth stuff again. Thanks.


Re: Large file support in 2.0.56?

2006-04-16 Thread Joost de Heer
Brandon Fosdick wrote:

hmmm...that doesn't help me much. I'm more interested in large files in 
mod_dav. Right now I can't upload anything much bigger than 700MB.


IMO, that's not something a webserver should be used for anyway.

Joost


Re: Large file support in 2.0.56?

2006-04-16 Thread Colm MacCarthaigh
On Sun, Apr 16, 2006 at 10:28:10PM +0200, Joost de Heer wrote:
 hmmm...that doesn't help me much. I'm more interested in large files in 
 mod_dav. Right now I can't upload anything much bigger than 700MB.
 
 IMO, that's not something a webserver should be used for anyway.

I do it all of the time. We have users who upload DVD iso's to their
DAV shares. Can't see any reason why DAV shouldn't be capable of such
things.

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: Large file support in 2.0.56?

2006-04-16 Thread Nick Kew
On Sunday 16 April 2006 20:41, Brandon Fosdick wrote:
 William A. Rowe, Jr. wrote:
  Actually, not entirely true.  There is some chewy goodness now in 2.0.x,
  such as log files which can grow beyond 2GB, from an APR 0.9
  APR_LARGE_FILE hack.  It's a gross hack, which means we can't really
  provide all sorts of large file manipulations, but logging, for example,
  is one of the most common
  complaints.

 hmmm...that doesn't help me much. I'm more interested in large files in
 mod_dav. Right now I can't upload anything much bigger than 700MB.

I haven't tried files that size, but that's far too small for LARGE_FILE to
be relevant.  I guess you knew that already, so does something else
lead you to suppose you're hitting an Apache limit?

Regarding your custom AAA module, it should be possible to use that
in 2.2.  If you don't want to port it, you can fall back to the 2.0
architecture.  At worst you may have to use 2.0 mod_auth and/or
mod_access alongside your module.
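
If you do end up porting it, the authentication side of 2.2 mostly comes down to
registering a provider. A rough sketch, with placeholder names (my_auth_module,
check_my_password and "myprovider" are all made up):

    #include "httpd.h"
    #include "http_config.h"
    #include "ap_provider.h"
    #include "mod_auth.h"

    /* check_my_password() stands in for whatever your module already does
     * to verify credentials; only the registration below is 2.2-specific. */
    static authn_status check_my_password(request_rec *r, const char *user,
                                          const char *password)
    {
        /* ... look the user up, compare the password ... */
        return AUTH_GRANTED;  /* or AUTH_DENIED / AUTH_USER_NOT_FOUND */
    }

    static const authn_provider my_authn_provider = {
        &check_my_password,
        NULL                  /* get_realm_hash, only needed for Digest */
    };

    static void register_hooks(apr_pool_t *p)
    {
        ap_register_provider(p, AUTHN_PROVIDER_GROUP, "myprovider", "0",
                             &my_authn_provider);
    }

    module AP_MODULE_DECLARE_DATA my_auth_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        register_hooks
    };

Only the authentication side moved to this provider model in 2.2; authorization
checks still run through hooks much as they did in 2.0.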

-- 
Nick Kew


Re: Large file support in 2.0.56?

2006-04-16 Thread Joost de Heer

Colm MacCarthaigh wrote:

On Sun, Apr 16, 2006 at 10:28:10PM +0200, Joost de Heer wrote:
hmmm...that doesn't help me much. I'm more interested in large files in 
mod_dav. Right now I can't upload anything much bigger than 700MB.

IMO, that's not something a webserver should be used for anyway.


I do it all of the time. We have users who upload DVD iso's to their
DAV shares. Can't see any reason why DAV shouldn't be capable of such
things.


There's a difference between 'being capable of' and 'being the proper tool for'.

Joost


Re: Large file support in 2.0.56?

2006-04-16 Thread Olaf van der Spek
On 4/16/06, Joost de Heer [EMAIL PROTECTED] wrote:
 Colm MacCarthaigh wrote:
  On Sun, Apr 16, 2006 at 10:28:10PM +0200, Joost de Heer wrote:
  hmmm...that doesn't help me much. I'm more interested in large files in
  mod_dav. Right now I can't upload anything much bigger than 700MB.
  IMO, that's not something a webserver should be used for anyway.
 
  I do it all of the time. We have users who upload DVD iso's to their
  DAV shares. Can't see any reason why DAV shouldn't be capable of such
  things.

 There's a difference between 'being capable of' and 'being the proper tool for'.

Of course, but why wouldn't DAV be the proper tool for it?


Re: Large file support in 2.0.56?

2006-04-16 Thread Brandon Fosdick

Nick Kew wrote:

I haven't tried files that size, but that's far too small for LARGE_FILE to
be relevant.  I guess you knew that already, so does something else
lead you to suppose you're hitting an Apache limit?


It does seem like a rather small and arbitrary limit. I can't think of what 
else besides apache would cause it, but I could be missing something. The files 
are being dumped into mysql in 64K blocks. The machine is an amd64, so that 
shouldn't be a problem, and 700MB isn't near 2 or 4 GB anyway. Uploading from a 
cable modem doesn't go anywhere near saturating the disk, cpu, or network. I've 
tried OSX, Win2k and WinXP, all with the same result. I'm running out of things 
to check. Any suggestions? I guess it could be a limit in mod_dav itself. I'm 
afraid to go there...it looks messy.



Regarding your custom AAA module, it should be possible to use that
in 2.2.  If you don't want to port it, you can fall back to the 2.0
architecture.  At worst you may have to use 2.0 mod_auth and/or
mod_access alongside your module.


Ah, that I didn't know. Thanks.


Re: Large file support in 2.0.56?

2006-04-15 Thread Paul Querna

Brandon Fosdick wrote:
I might have asked this before, but I've forgotten the answer, and so 
has google. Has any of the large file goodness from 2.2.x made it into 
2.0.x?


no.


Will it ever?


no.

Several of the things require APR 1.x, and some of them break binary 
compat.  They will never be fixed in 2.0.x.


Is there a specific reason you can't use 2.2.x?

-Paul



Re: Large file support in 2.0.56?

2006-04-15 Thread William A. Rowe, Jr.

Paul Querna wrote:

Brandon Fosdick wrote:

I might have asked this before, but I've forgotten the answer, and so 
has google. Has any of the large file goodness from 2.2.x made it into 
2.0.x?


no.


Actually, not entirely true.  There is some chewy goodness now in 2.0.x,
such as log files which can grow beyond 2GB, from an APR 0.9 APR_LARGE_FILE
hack.  It's a gross hack, which means we can't really provide all sorts of
large file manipulations, but logging, for example, is one of the most common
complaints.

But any further work on large files in 2.0 was abandoned long ago, in favor of
getting it right in the first place in 2.2.