On 30/05/06, Tobias Ulmer <[EMAIL PROTECTED]> wrote:

Thank you very much for the reply!

> also make sure that your buffers are large enough for all possible
> circumstances.

I am concerned about cases where the URL given by the client side is something like 2 MB.

In my understanding, there is a gap between the server accepting the connection
and starting to read in the data from the client, and the end of that reading-in,
when the server stores the information about the request in the env variables.
So if the URL is very big, it would first be transferred into httpd's own buffers,
and httpd would determine CONTENT_LENGTH and store that information in the
environment it hands to the CGI, right?

So my cgi.c isn't totally, directly exposed to the net, is it?
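What I mean is something like the following sketch of checking CONTENT_LENGTH
before reading anything from stdin (the 2 MB cap and the 413 response are just
example choices of mine, not anything httpd enforces):

/* Sketch: look at CONTENT_LENGTH from the environment before touching stdin.
 * The 2 MB cap is an arbitrary example value. */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

#define MAX_BODY (2L * 1024 * 1024)     /* my own example cap, 2 MB */

int main(void)
{
    char *cl = getenv("CONTENT_LENGTH");
    char *end;
    long len;

    if (cl == NULL || *cl == '\0')
        return 0;                       /* no body, e.g. a plain GET */

    errno = 0;
    len = strtol(cl, &end, 10);
    if (errno != 0 || *end != '\0' || len < 0 || len > MAX_BODY) {
        fputs("Status: 413 Request Entity Too Large\r\n\r\n", stdout);
        return 0;
    }
    /* only now allocate len + 1 bytes and read the body from stdin */
    return 0;
}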



As far as I can tell from src/usr.sbin/httpd/src/include/httpd.h, it says:
#ifndef DEFAULT_LIMIT_REQUEST_LINE
#define DEFAULT_LIMIT_REQUEST_LINE 8190
#endif /* default limit on bytes in Request-Line (Method+URI+HTTP-version) */
#ifndef DEFAULT_LIMIT_REQUEST_FIELDSIZE
#define DEFAULT_LIMIT_REQUEST_FIELDSIZE 8190
#endif /* default limit on bytes in any one header field  */
#ifndef DEFAULT_LIMIT_REQUEST_FIELDS
#define DEFAULT_LIMIT_REQUEST_FIELDS 100
#endif /* default limit on number of request header fields */

/* Limits on the size of various request items.  These limits primarily
* exist to prevent simple denial-of-service attacks on a server based
* on misuse of the protocol.  The recommended values will depend on the
* nature of the server resources -- CGI scripts and database backends
* might require large values, but most servers could get by with much
* smaller limits than we use below.  The request message body size can
* be limited by the per-dir config directive LimitRequestBody.

However, I have not found this LimitRequestBody in the default httpd.conf.
Is it an extra option that is understood only if it is present in a
<Directory> block, i.e. per-dir config?
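Something like this in httpd.conf is what I would picture (the path and the
value are only examples I made up):

<Directory "/var/www/cgi-bin">
    # limit request bodies to 1 MB for this directory (example value)
    LimitRequestBody 1048576
</Directory>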



* Internal buffer sizes are two bytes more than the DEFAULT_LIMIT_REQUEST_LINE
* and DEFAULT_LIMIT_REQUEST_FIELDSIZE below, which explains the 8190.
* These two limits can be lowered (but not raised) by the server config
* directives LimitRequestLine and LimitRequestFieldsize, respectively.

Does this really mean that a URL longer than 8190 bytes would be rejected?
Or am I mixing something up here?
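If I read that comment right, the limits could only be lowered, for instance
with something like this in the server config (the values are just examples):

# cannot exceed the compiled-in 8190
LimitRequestLine 4096
LimitRequestFieldsize 4096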



* DEFAULT_LIMIT_REQUEST_FIELDS can be modified or disabled (set = 0) by
* the server config directive LimitRequestFields.

If disabled, it would not check it at all, right? What would the limit be in
that case? In src/usr.sbin/httpd/src/main/http_core.c, lim is an int, so
would the only limit be the range of int? :

static const char *set_limit_req_fieldsize(cmd_parms *cmd, void *dummy,
                                          char *arg)
{
   const char *err = ap_check_cmd_context(cmd,
                                          NOT_IN_DIR_LOC_FILE|NOT_IN_LIMIT);
   int lim;

   if (err != NULL) {
       return err;
   }
   lim = atoi(arg);
   if (lim < 0) {
       return ap_pstrcat(cmd->temp_pool, "LimitRequestFieldsize \"", arg,
                         "\" must be a non-negative integer (0 = no limit)",
                         NULL);
   }
   if (lim > DEFAULT_LIMIT_REQUEST_FIELDSIZE) {
       return ap_psprintf(cmd->temp_pool, "LimitRequestFieldsize \"%s\" "
                         "must not exceed the precompiled maximum of %d",
                          arg, DEFAULT_LIMIT_REQUEST_FIELDSIZE);
   }
   cmd->server->limit_req_fieldsize = lim;
   return NULL;
}
From this, is there any way to control what gets shown (not a 500, I guess?)
when the if (lim > DEFAULT_LIMIT_REQUEST_FIELDSIZE) check matches? I would
also like to determine the IPs behind those requests for blacklisting or so.
(Please feel free to ignore this if it's a lazy question.)



> 2. In the CGI context, do fgets(input, len+1, stdin) and
> fread(buff, contentlength, 1, stdin) make a difference?

> fgets terminates buff with a '\0', which is imho better than plain
> fread. otoh, both are ok if buff is large enough(!) and you know
> what you're doing.
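To make sure I understood, something like this is what I have in mind for
reading the body (the 1 MB cap is just my example, and the error checking is
abbreviated):

/* Sketch: read a POST body with fread() and terminate it myself,
 * since fread() does not add the '\0' that fgets() would. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *cl = getenv("CONTENT_LENGTH");
    char *buf;
    long len;
    size_t got;

    len = (cl != NULL) ? atol(cl) : 0;  /* sketch only; strtol() checks omitted */
    if (len <= 0 || len > 1024L * 1024) /* my own example cap, 1 MB */
        return 0;

    buf = malloc((size_t)len + 1);      /* +1 for the terminating '\0' */
    if (buf == NULL)
        return 0;

    got = fread(buf, 1, (size_t)len, stdin);
    buf[got] = '\0';                    /* now safe to treat as a C string */

    /* ... parse buf ... */
    free(buf);
    return 0;
}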

How is buffer allocation handled by the kernel? Does it really allocate the
memory all at once, or does it wait until the data actually flows in? I mean,
if the kernel really reserves and prepares those chunks at once, then server
load could increase just from receiving the requests and giving space to each
of the processes, if I choose to be on the safe side and use really big buffers.
I know this is a newbie question, but I have to know this; sorry.
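What I am picturing is roughly the following (this is only my understanding of
typical virtual-memory behaviour, not something I have verified in the OpenBSD
kernel):

/* Sketch: a large malloc() normally only reserves address space;
 * physical pages are typically filled in on first write (zero-fill
 * on demand), so an untouched big buffer should not by itself drive
 * up real memory use.  This is an assumption on my side. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t big = 64UL * 1024 * 1024;    /* 64 MB, example size */
    char *buf = malloc(big);

    if (buf == NULL)
        return 1;

    memset(buf, 0, 4096);               /* touch only the first 4 KB */

    printf("allocated %lu bytes, touched 4096\n", (unsigned long)big);
    free(buf);
    return 0;
}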


> yes ;) if there is a buffer overflow, it's in your cgi, not in httpd.

But it can be prevented by interpreting the env variable info correctly,
I hope :)
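What I mean by interpreting the env info correctly is roughly this (the 4 KB
limit and the 414 response are only examples I chose):

/* Sketch: check the length of QUERY_STRING before copying it into a
 * fixed-size buffer, instead of trusting it to fit. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QS_MAX 4096                     /* example limit for my application */

int main(void)
{
    char qs[QS_MAX];
    const char *env = getenv("QUERY_STRING");

    if (env == NULL)
        env = "";
    if (strlen(env) >= sizeof(qs)) {
        fputs("Status: 414 Request-URI Too Large\r\n\r\n", stdout);
        return 0;
    }
    strlcpy(qs, env, sizeof(qs));       /* OpenBSD strlcpy() always '\0'-terminates */

    /* ... parse qs ... */
    return 0;
}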

> httpd's chroot can prevent an attacker from getting a shell and doing more
> harm. Depending on your application, he can still do a lot of damage to
> your application or to other clients (XSS attacks for example).

Thank you for the example.

> read the superb manpages of the functions you want to use. there are
> often examples and pointers to things you should do/not do.

Yes, the man pages are really superb! I do search them, and I do enjoy
reading them.

To all the OpenBSD technical writers: thank you for the time you spent preparing them!



I had asked only because I wanted to understand the logic behind using those
particular functions rather than others in this particular context.

> I'm sure i forgot tons of stuff :)

Still, it is a good place for me to start. Thank you, really.


Tobias

Vladas
