When you call ns_queryget/ns_getform, NS parses the input and, in the case of attachments, splits the content out into a temporary file. Say multipart/form-data content is being sent and I call ns_queryget at the beginning of my connection script: the parser needs all of the data to be available, yet a big attachment can come first and the other query parameters at the end. How would lazy parsing happen in that case? Will my ns_queryget wait while the content is still being downloaded?

Stephen Deasey wrote:
On 6/16/05, Zoran Vasiljevic <[EMAIL PROTECTED]> wrote:

Hi!

I ran into several problems while debugging a problem at the
customer site related to uploading files and the size of those files.

As it looks, NS/AS spools all the incoming content into
memory (urks!). Depending on the content type, the content
may also be parsed, as in the case of multipart/form-data.

This is very convenient (you need not fiddle with the low-level
stuff), but it can be extremely inefficient and bloat the memory
footprint of the server if somebody likes to upload tens or hundreds
of MB of data.

I have examined the recent changes in AS done by Jim which
tackle this issue. He's basically using the "maxinput" config
option to decide whether to stuff everything into memory (as usual)
or, in case the input exceeds the maxinput level, to spool the
input into a temporary file. But... the programmer is then left
alone and has to parse the incoming input himself, using the
ns_conn contentchannel command to read the content.
Even worse, the form machinery which parses the
multipart data still operates on connPtr->content, which
does not reflect the entire content (the excess is spooled
into a temp file!).

Hm... this seems pretty unusable for this particular case to me.

Therefore, and as usual, I have an idea/suggestion for how to improve
this...

I would take the same direction as Jim did, but would simply mmap the
temp file onto connPtr->content! This would make all the rest
of the code work as usual. At the end of the connection, the file is
unmapped and there you go.

One could even explore the mmap machinery further and see if we can
drop the temp file entirely and use the system paging for that:
if the input exceeds maxinput, we just mmap /dev/zero (i.e. anonymous
memory). I believe something like that should be possible, but I will
have to double-check whether it's true.
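
Roughly, the temp-file variant could look like the sketch below. The
Conn struct and field names here are stand-ins for the real connPtr
internals, just to show the mechanics: spool the oversized input to a
file, mmap it read-only so connPtr->content can point at the mapping
and the form machinery keeps working unchanged, then munmap at
connection close. The /dev/zero variant would amount to an anonymous
mapping (MAP_ANONYMOUS where available).

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    typedef struct {
        char   *content;          /* what the form machinery reads */
        size_t  contentLength;
    } Conn;

    /* Map an already-written spool file so it can serve as content. */
    static int
    MapSpoolFile(Conn *connPtr, const char *path)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
            return -1;
        }
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }
        connPtr->contentLength = (size_t) st.st_size;
        connPtr->content = mmap(NULL, connPtr->contentLength, PROT_READ,
                                MAP_PRIVATE, fd, 0);
        close(fd);                /* the mapping survives the close */

        return connPtr->content == MAP_FAILED ? -1 : 0;
    }

    /* ... and at the end of the connection: */
    static void
    UnmapSpoolFile(Conn *connPtr)
    {
        munmap(connPtr->content, connPtr->contentLength);
        connPtr->content = NULL;
    }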



It was at one point implemented with mmap:

http://cvs.sourceforge.net/viewcvs.py/aolserver/aolserver/nsd/driver.c?rev=1.34&view=markup


Spooling large requests to disk is clearly necessary.  Almost always,
a large request needs to be stored in a file server-side anyway, so it
might as well go there from the start.

The only problem I see is that the calls to open a file and write
content to disk are blocking, and they happen in the context of the
driver thread which is busy multiplexing many non-blocking operations
on sockets.  Should one of the calls used to spool to disk block,
everything comes to a stop.

The version 3.x model was to read ahead up to the end of the headers in
the driver thread, then pass the conn to a conn thread. As far as I
remember, the conn thread would then spool all content to disk, but
this could have been on-demand, and it might have been only file-upload
data. A programmer could then access the spooled data using
Ns_ConnRead(), Ns_ConnReadLine() etc., or they would be called for you
by ns_getform etc. The thing to note, however, is that all the blocking
calls happened in separate conn threads.

In early version 4.0 the model was changed so that the driver thread
would read ahead all data (up to Content-Length bytes) before the conn
was passed to a conn thread. In a point release a limit was
introduced to avoid the obvious DoS attack. This allowed an easier
interface to the data for programmers: Ns_ConnContent(), which returns
a pointer to an array of bytes. Ns_ConnReadLine() etc. are no longer
used and are currently broken.

Version 4.1 work seems to be trying to tackle the problem of what
happens when large files are uploaded. The version 4.0 model works
extremely well for HTTP POST data up to a few K in size, but as you've
noticed it really bloats memory when multi-megabyte files are
uploaded. This version also introduces limits, which are a
URL-specific way of pre-declaring the max upload size and some other
parameters.


So anyway, here's my idea of how it should work:

There's a maxreadahead parameter which is <= maxinput. When a request
arrives with 0 < Content-Length < maxinput, maxreadahead bytes are
read into a buffer by the driver thread before the conn is passed to a
conn thread.

The conn thread runs the registered proc for that URL. If that
procedure does not try to access the content, then when the conn is
returned to the driver thread any content beyond maxreadahead is
drained.

If the registered proc does try to access the content via e.g.
Ns_ConnContent() (and I think this would be the main method, used
underneath by all others) and the content is <= maxreadahead, then a
pointer to the readahead buffer is returned.

If the content is accessed and it is > maxreadahead, a temp file is
mmapped, the readahead buffer is dumped to it, and the remaining bytes
are read from the socket, possibly blocking many times, and dumped into
the mmapped file, again possibly blocking. A call to Ns_ConnContent()
then returns a pointer to the mmapped bytes of the open file.
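
As a sketch of that lazy, spill-on-first-access step (with stand-in
types and names, not the real 4.x internals, and writing the file
first and mapping it afterwards rather than writing through a
pre-sized mapping):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    typedef struct {
        int     sock;            /* rest of the body still unread here */
        char   *readahead;       /* up to maxreadahead bytes, from driver */
        size_t  avail;           /* bytes actually in the readahead buffer */
        size_t  contentLength;   /* from the Content-Length header */
        char   *content;         /* lazily materialized */
    } Conn;

    static char *
    ConnContent(Conn *c)
    {
        if (c->content != NULL) {
            return c->content;           /* already materialized */
        }
        if (c->contentLength <= c->avail) {
            c->content = c->readahead;   /* fits in the readahead buffer */
            return c->content;
        }

        /* Spill path: dump the readahead, then drain the socket.
         * Every call below may block, which is fine in a conn thread
         * but would stall everything in the driver thread. */
        FILE *tmp = tmpfile();
        if (tmp == NULL) {
            return NULL;
        }
        fwrite(c->readahead, 1, c->avail, tmp);

        size_t left = c->contentLength - c->avail;
        char buf[8192];
        while (left > 0) {
            ssize_t n = read(c->sock, buf,
                             left < sizeof(buf) ? left : sizeof(buf));
            if (n <= 0) {
                return NULL;             /* client hung up / error */
            }
            fwrite(buf, 1, (size_t) n, tmp);
            left -= (size_t) n;
        }
        fflush(tmp);

        c->content = mmap(NULL, c->contentLength, PROT_READ, MAP_PRIVATE,
                          fileno(tmp), 0);
        return c->content == MAP_FAILED ? NULL : c->content;
    }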

At any time before the registered proc asks for the content, it can
check the Content-Length header and decide whether it is too large to
accept. You could imagine setting a low global maxinput, plus a call
such as Ns_ConnSetMaxInput() which a registered proc could invoke to
increase the limit for that connection only. The advantage over the
limits scheme in 4.1 is that the code which checks the size of the
content and the code which processes it are kept together, rather than
having to pre-declare maxinput sizes for arbitrary URLs in the config
file.
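
For illustration, a registered proc under that scheme might look
something like this. Ns_ConnSetMaxInput() is the imagined call from
above and does not exist; Ns_ConnContent() and the public
conn->contentLength field do, and I'm assuming the usual
(void *arg, Ns_Conn *conn) proc signature and Ns_ConnReturnStatus()
for the responses:

    #include "ns.h"

    /* Hypothetical per-connection override, as imagined above. */
    extern int Ns_ConnSetMaxInput(Ns_Conn *conn, int max);

    static int
    UploadProc(void *arg, Ns_Conn *conn)
    {
        /* Check the declared size before touching the body... */
        if (conn->contentLength > 500 * 1024 * 1024) {
            return Ns_ConnReturnStatus(conn, 413);  /* refuse early */
        }

        /* ...raise the limit for this connection only... */
        Ns_ConnSetMaxInput(conn, conn->contentLength);  /* hypothetical */

        /* ...and only now materialize the content, which may lazily
         * spool to disk if it exceeds maxreadahead. */
        char *content = Ns_ConnContent(conn);
        if (content == NULL) {
            return Ns_ConnReturnStatus(conn, 400);
        }

        /* ... process the upload ... */
        return Ns_ConnReturnStatus(conn, 200);
    }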

This is similar to 4.1, except the task of overflowing to disk is moved
to the conn thread and is lazy. The mental model is also slightly
different: the assumption is that content goes to disk, but there's a
cache for read-ahead which may be enough to hold everything. In 4.1
it's the reverse: everything is read into memory, unless it's large, in
which case it overflows to disk.


How does that sound?



--
Vlad Seryakov
571 262-8608 office
[EMAIL PROTECTED]
http://www.crystalballinc.com/vlad/
