Thinking about large POST buffers, I've just got another idea. Mongoose can spool POST requests to temporary files, but instead of passing a FILE * pointer, it can memory-map the file and hand back a memory address as if it were malloc-ed. This way, the API wouldn't change at all, and the existing mg_parse_multipart() can be used.
There would be a limitation on POST size on 32-bit systems, because multi-gigabyte files could not be fully mapped. Also, environments without a filesystem cannot do file IO, so the temporary file approach wouldn't work there. In such cases, Mongoose can do memory buffering by default, falling back to temporary files only for large POSTs. A threshold could be configured, so the solution seems to suit all cases.
