I've noticed that when downloading large files (200MB+) via WebDAV, there is a long delay before the first byte of the response is sent. I think the problem is that the WebDAV servlet first copies the file out of the datastore into a temporary file, and then uses that temporary file as the source for streaming the response to the client.
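To illustrate the difference in plain java.io terms (this is only a sketch; the method names are hypothetical and this is not the actual Jackrabbit ExportContext API), compare spooling through a temp file with streaming directly:

```java
import java.io.*;

public class SpoolDemo {

    // Current behavior (as described below): spool the content to a temp file
    // first, then copy the temp file to the response stream. Two full passes
    // over the data, plus temporary disk usage on the server.
    static void spoolViaTempFile(InputStream content, OutputStream response) throws IOException {
        File tmp = File.createTempFile("webdav-export", ".bin");
        try (OutputStream tmpOut = new FileOutputStream(tmp)) {
            content.transferTo(tmpOut);      // first full copy, to disk
        }
        try (InputStream tmpIn = new FileInputStream(tmp)) {
            tmpIn.transferTo(response);      // second full copy, to the client
        } finally {
            tmp.delete();
        }
    }

    // Suggested behavior: stream the content directly to the response.
    // A single pass, no temp file, and the client sees data immediately.
    static void spoolDirect(InputStream content, OutputStream response) throws IOException {
        content.transferTo(response);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "large file content".getBytes();
        ByteArrayOutputStream viaTemp = new ByteArrayOutputStream();
        ByteArrayOutputStream direct = new ByteArrayOutputStream();
        spoolViaTempFile(new ByteArrayInputStream(data), viaTemp);
        spoolDirect(new ByteArrayInputStream(data), direct);
        // Both paths deliver identical bytes; only latency and disk usage differ.
        System.out.println(java.util.Arrays.equals(viaTemp.toByteArray(), direct.toByteArray()));
    }
}
```

Both paths deliver the same bytes; the difference is that the temp-file path delays the first response byte until the entire file has been copied to disk.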
Tracing through the code, this is what appears to happen:

1. The DefaultHandler IOHandler spools the data from the content Node to the ExportContext's output stream.
2. The ExportContext simply writes the data to a temporary file.
3. Once the temp file is written, the ExportContext spools its contents to the OutputContext's output stream (the HTTP response).

So in the case of a large file, the entire content must be written to a temp file before any data is sent in the HTTP response, costing extra time for both parties and disk space on the server.

In our case we use the File DataStore, meaning the content is already in a file. Can this behavior be changed in that situation? I.e., could the ExportContextImpl simply spool the content directly to the OutputContext's output stream? I'm not familiar enough with the Jackrabbit code to tell whether this is something that could be changed via configuration, or whether it would be an enhancement to the WebDAV code.

I think this may also be part of the problem discussed in this thread:
http://www.nabble.com/Problems-storing-accessing-very-large-files-via-WebDAV-td23307497.html#a23318154

Filed as http://issues.apache.org/jira/browse/JCR-2173

Thanks,

Greg

--
Greg Schueler
ControlTier Software, Inc
[email protected]
650-292-9660x709
http://www.controltier.com
