Re: CRL verification in mod_ssl

2008-09-15 Thread Nicob
On Saturday, 30 August 2008 at 14:50 +0200, Nicob wrote:
> It implements the matching on the Authority DN (vs. Authority
> Key ID actually) during client certificate verification against a CRL
> *and* a required test during CRL validation, as described in paragraph
> 6.3.3 of RFC 3280

So, do you think this patch could be included?

If not, I plan to open two bug reports:
- one about the matching on the Authority DN (missing feature)
- one about the missing verification of the CRL issuer's key usage
  (security bug; a rough sketch of that check follows below)
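
For reference, the RFC 3280 6.3.3 requirement boils down to: if the CRL
issuer's certificate carries a keyUsage extension, the cRLSign bit must be
asserted before the CRL may be trusted. The following is only an illustration
of that check, not the patch itself; 'issuer' is assumed to be the X509 of the
certificate that signed the CRL:

    #include <openssl/x509v3.h>

    /* Return non-zero if 'issuer' may sign CRLs: either its certificate has
     * no keyUsage extension at all, or the extension asserts cRLSign. */
    static int issuer_may_sign_crls(X509 *issuer)
    {
        ASN1_BIT_STRING *usage;
        int ok = 1;  /* no keyUsage extension -> no restriction */

        usage = X509_get_ext_d2i(issuer, NID_key_usage, NULL, NULL);
        if (usage != NULL) {
            ok = ASN1_BIT_STRING_get_bit(usage, 6);  /* bit 6 = cRLSign */
            ASN1_BIT_STRING_free(usage);
        }
        return ok;
    }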

Note about the patch: at line 68 it could also check the return value of
BIO_read() before writing the NUL terminator, even though this code is
executed only in debug mode.
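
In code, that note amounts to something like the following (the patch itself
is not reproduced here; 'bio' and 'buf' are just placeholders):

    char buf[256];
    int n = BIO_read(bio, buf, sizeof(buf) - 1);
    if (n >= 0) {
        buf[n] = '\0';   /* only write the terminator for a valid read */
        /* ... debug output of buf ... */
    }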

Regards,
Nicob



Re: Anyone here with knowledge of MPM event?

2008-09-15 Thread Paul Querna

Rustam Abdullaev wrote:
"Paul Querna" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
The event mpm is likely the right place to start since it already have a 
concept of suspendable 'conections', but how it is done is specific to 
HTTP., However, the problems with making a suspendable *request* are much 
deeper in the other parts of the core, and less of a problem in the MPMs.

...


Thanks for the info! Now that I think about it, I could actually live with 
suspendable connections (not requests).


Is it possible to add some flags to the EOS metadata bucket? For example, to 
have a 'continuation' EOS bucket, which would cause the MPM to 'replay' the 
original http request after a certain time (or on demand). In essence there 
would be 2 separate requests, but they would share the same connection.


To me this looks like a fairly small change to the overall architecture. 
Most changes would be inside the Event MPM.


Thoughts? Is this possible? 


What you are describing is HTTP Keep-Alive [1], and the Event MPM already 
does this :-)


-Paul

[1] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html





Re: Anyone here with knowledge of MPM event?

2008-09-15 Thread Rustam Abdullaev
"Paul Querna" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
>
> The event mpm is likely the right place to start since it already has a 
> concept of suspendable 'connections', but how it is done is specific to 
> HTTP. However, the problems with making a suspendable *request* are much 
> deeper in the other parts of the core, and less of a problem in the MPMs.
> ...

Thanks for the info! Now that I think about it, I could actually live with 
suspendable connections (not requests).

Is it possible to add some flags to the EOS metadata bucket? For example, to 
have a 'continuation' EOS bucket, which would cause the MPM to 'replay' the 
original HTTP request after a certain time (or on demand). In essence there 
would be two separate requests, but they would share the same connection.

To me this looks like a fairly small change to the overall architecture. 
Most changes would be inside the Event MPM.

Thoughts? Is this possible? 
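
For reference, if flagging the EOS bucket itself turns out to be awkward, the
APR-util bucket API also lets you define a brand-new metadata bucket type, the
same way the stock EOS and FLUSH types are defined. Everything below is
hypothetical (httpd knows nothing about a 'continuation' bucket; the event MPM
would have to be taught to look for it):

    #include "apr_buckets.h"

    /* Hypothetical 'continuation' metadata bucket type, declared the same
     * way apr_bucket_type_eos and apr_bucket_type_flush are. */
    static const apr_bucket_type_t bucket_type_continuation = {
        "CONTINUATION", 5, APR_BUCKET_METADATA,
        apr_bucket_destroy_noop,
        apr_bucket_read_notimpl,
        apr_bucket_setaside_noop,
        apr_bucket_split_notimpl,
        apr_bucket_simple_copy
    };

    static apr_bucket *continuation_bucket_create(apr_bucket_alloc_t *list)
    {
        apr_bucket *b = apr_bucket_alloc(sizeof(*b), list);

        APR_BUCKET_INIT(b);
        b->free   = apr_bucket_free;
        b->list   = list;
        b->type   = &bucket_type_continuation;
        b->length = 0;
        b->start  = 0;
        b->data   = NULL;
        return b;
    }

A handler would append such a bucket after its data, and the MPM could
recognize it with a pointer comparison on b->type; whether that is nicer than
flags on the EOS bucket is an open question.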





Re: Anyone here with knowledge of MPM event?

2008-09-15 Thread Paul Querna

Rustam Abdullaev wrote:

> Hi all,
>
> I need some help with the (experimental) event MPM. I want to extend it to 
> support suspendable requests (for comet). Any help is appreciated!





Hi Rustam,

Funny that you mention suspendable requests; I've been thinking about them 
for other reasons recently :-)


The event mpm is likely the right place to start since it already has a 
concept of suspendable 'connections', but how it is done is specific to 
HTTP. However, the problems with making a suspendable *request* are 
much deeper in the other parts of the core, and less of a problem in the 
MPMs.


The first major problem is the filter stack and how handlers are invoked.

A content handler writes to the filter stack directly; each filter 
invokes the next in the chain from within its own function.

A filter can choose to buffer, or do whatever it likes with the content.

When a handler is done, it inserts an EOS bucket, which is supposed to 
tell all filters to flush, since it is the end of the current request.
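
As a concrete (if trivial) illustration of that flow, using only the stock
httpd/APR APIs, a minimal handler looks roughly like this sketch:

    #include "httpd.h"
    #include "http_protocol.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* The handler builds a brigade, appends its content, terminates it with
     * an EOS bucket, and pushes the whole thing into the output filters. */
    static int sketch_handler(request_rec *r)
    {
        apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                                    r->connection->bucket_alloc);

        ap_set_content_type(r, "text/plain");
        apr_brigade_puts(bb, NULL, NULL, "hello\n");

        /* EOS tells every filter downstream that this request's body is done. */
        APR_BRIGADE_INSERT_TAIL(bb,
            apr_bucket_eos_create(r->connection->bucket_alloc));

        return ap_pass_brigade(r->output_filters, bb) == APR_SUCCESS
                   ? OK : HTTP_INTERNAL_SERVER_ERROR;
    }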


I think the first step, complicated as it is, might be the easiest way 
to demo this: write a custom static file handler that serves the file at 
a constant bitrate, e.g. 500 kb/s.


To do this, you would write a file bucket of X length, insert a 'fake' 
EOS, and write it out as normal.  You would also have a custom filter that 
removes the early EOS before it hits the core network filter.  Once 
the first 'pulse' happens, you would then suspend the request for Z 
milliseconds, and modify the event mpm to re-invoke the handler at the 
desired time.
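
A sketch of that custom filter, with one big assumption spelled out: how it
tells the handler's fake EOS apart from the real one is an open design
question, and here it simply checks a hypothetical request note
("rate-limit-done") that the handler would set on its final pulse:

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"
    #include "apr_tables.h"

    /* Strip the handler's 'fake' EOS so it never reaches the core network
     * filter; let everything else (including the real EOS) pass through. */
    static apr_status_t strip_fake_eos_filter(ap_filter_t *f,
                                              apr_bucket_brigade *bb)
    {
        int really_done = apr_table_get(f->r->notes, "rate-limit-done") != NULL;
        apr_bucket *b = APR_BRIGADE_FIRST(bb);

        while (b != APR_BRIGADE_SENTINEL(bb)) {
            apr_bucket *next = APR_BUCKET_NEXT(b);
            if (APR_BUCKET_IS_EOS(b) && !really_done) {
                apr_bucket_delete(b);   /* drop the early EOS, keep the data */
            }
            b = next;
        }
        return ap_pass_brigade(f->next, bb);
    }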


I think the easiest way would be to add a new return value from the handler, 
SUSPEND, which the core would do nothing further with, on the assumption 
that the MPM will call the handler again at the desired time. 
Inside the handler, you would have, for example, something like this:


    int sr = 0;

    /* Proposed MPM query: does this MPM support suspending requests? */
    ap_mpm_query(AP_MPM_SUSPEND_REQUESTS, &sr);

    if (sr) {
        /* Proposed API: have the MPM call your_function_callback with
         * void_baton after roughly 200 milliseconds, then re-run us. */
        ap_mpm_suspend_request(r, your_function_callback, void_baton, 200);
        return SUSPEND;
    }
    else {
        return run_request_as_normal();
    }


Those are my basic thoughts, assuming you don't want to rewrite the entire 
filter chain, which is something I've been trying to avoid :-)


Thoughts?

Paul


Re: svn commit: r691418 [1/2] - in /httpd/httpd/trunk: ./ docs/manual/mod/ modules/filters/

2008-09-15 Thread Basant Kukreja
Hi,

Attached is a *rough* patch that uses transient buckets in the mod_sed output
filter.

Testing:
  I created 30MB and 300MB text files and ran OutputSed commands on them.
* Before the patch, the process size (worker MPM with 1 thread) grew to about 
300MB for a single request.  After the patch, the process size stays around 
3MB while serving the 300MB response.

I also removed one extra copy in the output processing path.

I need to add some more error handling to finalize the patch. Any comments are
welcome.
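
For anyone reading along, this is the core of the approach in a generic sketch
(names here are not mod_sed's): each chunk goes out as a TRANSIENT bucket that
merely points into a scratch pool, and once a FLUSH has been pushed down the
chain that scratch pool can be cleared, so process size stays flat no matter
how large the response grows.

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Send one chunk as a transient bucket followed by a flush, then reclaim
     * the scratch memory backing the chunk. */
    static apr_status_t send_chunk(ap_filter_t *f, apr_bucket_brigade *bb,
                                   apr_pool_t *scratch,
                                   const char *buf, apr_size_t len)
    {
        apr_status_t rv;
        apr_bucket *b = apr_bucket_transient_create(buf, len,
                                                    f->c->bucket_alloc);

        APR_BRIGADE_INSERT_TAIL(bb, b);
        APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_flush_create(f->c->bucket_alloc));

        rv = ap_pass_brigade(f->next, bb);  /* downstream consumes or sets aside */
        apr_brigade_cleanup(bb);
        apr_pool_clear(scratch);            /* now the backing memory can go */
        return rv;
    }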

Regards,
Basant.

On Thu, Sep 04, 2008 at 09:47:26PM -0500, William A. Rowe, Jr. wrote:
> Basant Kukreja wrote:
>>
>> Based on your suggestion, I will check what other improvements from
>> mod_substitute can be brought into mod_sed.
>
> Note that mod_substitute's brigade handling is already based on the work of
> both Jim and Nick (author of mod_line_edit) - so they are pretty certain
> that it is the right approach.  Good idea to borrow from it.
>
> Bill
Index: modules/filters/mod_sed.c
===
--- modules/filters/mod_sed.c   (revision 692768)
+++ modules/filters/mod_sed.c   (working copy)
@@ -26,7 +26,8 @@
 #include "libsed.h"
 
 static const char *sed_filter_name = "Sed";
-#define MODSED_OUTBUF_SIZE 4000
+#define MODSED_OUTBUF_SIZE 8000
+#define MAX_TRANSIENT_BUCKETS 50
 
 typedef struct sed_expr_config
 {
@@ -44,11 +45,14 @@
 typedef struct sed_filter_ctxt
 {
 sed_eval_t eval;
+ap_filter_t *f;
 request_rec *r;
 apr_bucket_brigade *bb;
 char *outbuf;
 char *curoutbuf;
 int bufsize;
+apr_pool_t *tpool;
+int numbuckets;
 } sed_filter_ctxt;
 
 module AP_MODULE_DECLARE_DATA sed_module;
@@ -71,29 +75,68 @@
 sed_cfg->last_error = error;
 }
 
+/* clear the temporary pool (used for transient buckets)
+ */
+static void clear_ctxpool(sed_filter_ctxt* ctx)
+{
+apr_pool_clear(ctx->tpool);
+ctx->outbuf = NULL;
+ctx->curoutbuf = NULL;
+ctx->numbuckets = 0;
+}
+
+/* alloc_outbuf
+ * allocate output buffer
+ */
+static void alloc_outbuf(sed_filter_ctxt* ctx)
+{
+ctx->outbuf = apr_palloc(ctx->tpool, ctx->bufsize + 1);
+ctx->curoutbuf = ctx->outbuf;
+}
+
+/* append_bucket
+ * Allocate a new bucket from buf and sz and append to ctx->bb
+ */
+static void append_bucket(sed_filter_ctxt* ctx, char* buf, int sz)
+{
+int rv;
+apr_bucket *b;
+if (ctx->tpool == ctx->r->pool) {
+/* We are not using transient bucket */
+b = apr_bucket_pool_create(buf, sz, ctx->r->pool,
+   ctx->r->connection->bucket_alloc);
+APR_BRIGADE_INSERT_TAIL(ctx->bb, b);
+}
+else {
+/* We are using transient bucket */
+b = apr_bucket_transient_create(buf, sz,
+ctx->r->connection->bucket_alloc);
+APR_BRIGADE_INSERT_TAIL(ctx->bb, b);
+ctx->numbuckets++;
+if (ctx->numbuckets >= MAX_TRANSIENT_BUCKETS) {
+b = apr_bucket_flush_create(ctx->r->connection->bucket_alloc);
+APR_BRIGADE_INSERT_TAIL(ctx->bb, b);
+rv = ap_pass_brigade(ctx->f->next, ctx->bb);
+apr_brigade_cleanup(ctx->bb);
+clear_ctxpool(ctx);
+}
+}
+}
+
 /*
  * flush_output_buffer
  * Flush the  output data (stored in ctx->outbuf)
  */
-static void flush_output_buffer(sed_filter_ctxt *ctx, char* buf, int sz)
+static void flush_output_buffer(sed_filter_ctxt *ctx)
 {
 int size = ctx->curoutbuf - ctx->outbuf;
 char *out;
-apr_bucket *b;
-if (size + sz <= 0)
+if ((ctx->outbuf == NULL) || (size <=0))
 return;
-out = apr_palloc(ctx->r->pool, size + sz);
-if (size) {
-memcpy(out, ctx->outbuf, size);
-}
-if (buf && (sz > 0)) {
-memcpy(out + size, buf, sz);
-}
-/* Reset the output buffer position */
+out = apr_palloc(ctx->tpool, size);
+memcpy(out, ctx->outbuf, size);
+append_bucket(ctx, out, size);
 ctx->curoutbuf = ctx->outbuf;
-b = apr_bucket_pool_create(out, size + sz, ctx->r->pool,
-   ctx->r->connection->bucket_alloc);
-APR_BRIGADE_INSERT_TAIL(ctx->bb, b);
 }
 
 /* This is a call back function. When libsed wants to generate the output,
@@ -104,11 +147,38 @@
 /* dummy is basically filter context. Context is passed during invocation
  * of sed_eval_buffer
  */
+int remainbytes = 0;
 sed_filter_ctxt *ctx = (sed_filter_ctxt *) dummy;
-if (((ctx->curoutbuf - ctx->outbuf) + sz) >= ctx->bufsize) {
-/* flush current buffer */
-flush_output_buffer(ctx, buf, sz);
+if (ctx->outbuf == NULL) {
+alloc_outbuf(ctx);
 }
+remainbytes = ctx->bufsize - (ctx->curoutbuf - ctx->outbuf);
+if (sz >= remainbytes) {
+if (remainbytes > 0) {
+memcpy(ctx->curoutbuf, buf, remainbytes);
+buf += remainbytes;
+sz -= remainbytes;
ctx->curoutbuf += remainbytes;