Re: [idea] web-application security powered by SELinux

2009-04-01 Thread KaiGai Kohei
I realize the previous proposal may have been a bit too abstract, which
made it confusing to work out what was actually being suggested.

I would like to focus on more tangible issues first, to gather
attention and have a fruitful discussion. :-)

The purpose of my proposal is to launch web applications with a more
restricted privilege set using SELinux. This makes it possible to
prevent a buggy web application from leaking or manipulating
confidential information, such as private data, authentication
credentials, credit card numbers and so on.
In my opinion, at least two enhancements are necessary.

The first is identification of the client, which is necessary to decide
what privilege set should be assigned. If we choose a strategy that
associates an http-username with its individual privileges, the simplest
idea is to use a configuration file that maps each http-username to its
privilege (called a security context), as follows:
  
  # HTTP User and Security context
  #   the server process works with system_u:system_r:httpd_t:s0
  #   in the default security policy.
  #
  foo system_u:system_r:httpd_unpriv_webapp_t:s0
  var system_u:system_r:httpd_unpriv_webapp_t:s0
  baz system_u:system_r:httpd_admin_webapp_t:s0
  *   system_u:system_r:httpd_unauth_webapp_t:s0
  
In any case, we have to decide what security context should be assigned
to the launched web application based on attributes of the client.
Source IP addresses or an authentication token are candidates besides
the http-username.
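
Just to illustrate the lookup (the function and table names here are made
up for the example; nothing like this exists yet):

  #include "apr_tables.h"

  /* Pick the security context for a (possibly unauthenticated) user from
   * a table built out of the configuration file above; "*" is the
   * fallback entry. */
  static const char *context_for_user(apr_table_t *user_context_map,
                                      const char *http_user)
  {
      const char *context = NULL;

      if (http_user) {
          context = apr_table_get(user_context_map, http_user);
      }
      if (!context) {
          /* unauthenticated or unknown user: use the default entry */
          context = apr_table_get(user_context_map, "*");
      }
      return context;
  }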

The other necessary enhancement is assignment of the privilege set on
the context of the web application. We need to execute
ap_invoke_handler() with an individual, more restricted privilege, but
this must not affect any following requests.
Because the thread/process that handles requests is reused again and
again, we cannot assign the restricted privilege to it directly.
My idea is therefore to create a one-time thread that executes the web
application under the restrictive privilege. Please note that providing
a path to revert the privilege would be no different from a
privilege-escalation risk, so there must be no such path.
The parent side simply waits for the completion of the worker
thread. If we implement this feature as an external module, the
following steps will be necessary (a rough code sketch is shown
after the list).
(I assume the httpd development tree.)
1) The module registers its handler on the ap_hook_process_request() hook.

2) When it is invoked, the handler creates a child thread and then
   sleeps until the child has finished its work.

3) The child assigns more restrictive privileges to itself and invokes
   ap_process_request() to launch the web application. Because its
   privileges are limited to the necessary minimum, the launched web
   application cannot access resources beyond its permissions, thanks
   to checks in the operating system.

4) The child thread exits, and the handler (parent) wakes up.

5) The handler returns HTTP_OK to skip invocation of the hardwired
   ap_process_request().
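
To make the above concrete, here is a very rough sketch (illustrative
only, not working module code: it assumes the ap_hook_process_request()
hook and the handler signature described above, uses setcon(3) from
libselinux, hard-codes one security context instead of looking it up per
client, and omits all error handling):

  #include "httpd.h"
  #include "http_config.h"
  #include "http_request.h"
  #include "apr_thread_proc.h"
  #include <selinux/selinux.h>          /* setcon() */

  /* step 3: the one-time worker restricts its own domain, then runs
   * the web application via ap_process_request(). */
  static void * APR_THREAD_FUNC restricted_worker(apr_thread_t *t, void *data)
  {
      request_rec *r = data;

      /* in reality the context would be looked up per client, as above */
      if (setcon("system_u:system_r:httpd_unpriv_webapp_t:s0") == 0) {
          ap_process_request(r);
      }
      apr_thread_exit(t, APR_SUCCESS);
      return NULL;
  }

  /* steps 2, 4 and 5 */
  static int selinux_process_request(request_rec *r)
  {
      apr_thread_t *thread;
      apr_status_t rv;

      if (apr_thread_create(&thread, NULL, restricted_worker, r,
                            r->pool) != APR_SUCCESS) {
          return DECLINED;
      }
      apr_thread_join(&rv, thread);  /* parent sleeps until the child exits */

      return OK;     /* handled: skip the hardwired ap_process_request() */
  }

  /* step 1 */
  static void selinux_register_hooks(apr_pool_t *p)
  {
      ap_hook_process_request(selinux_process_request, NULL, NULL,
                              APR_HOOK_MIDDLE);
  }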

I guess you may wonder about the performance penalty, but I assume the
users of this feature do not give the highest priority to performance.
It is a security tradeoff.

I would like to hear opinions from the httpd hackers.
If we can achieve the same or a similar feature in another way,
please tell me.


BTW, we cannot apply the above strategy to the httpd-2.2.x series,
because it has no ap_run_process_request() hook and ap_process_request()
is hidden behind CORE_PRIVATE. I don't know the release policy of this
community; would it be impossible to backport these features to 2.2.x?

Thanks,

KaiGai Kohei wrote:
 Hello,
 
 I have been considering a way to run web applications with a restrictive
 privilege set based on identification of the client. This makes it possible
 to check for, and prevent, violating actions from (buggy) applications using
 features provided by the operating system.
 
 My concern is that most web applications, such as PHP scripts, are
 launched as part of the web-server process itself. This means that all
 web-application instances share the same privilege set as the server
 process, even if they are invoked for requests coming from different
 users. In other words, we cannot apply meaningful access controls to
 them (except for ones applied by the web application itself, but it is
 hard to ensure those are free of security bugs), because to the
 operating system it looks like multiple processes/threads with the same
 privileges running simultaneously.
 
 If we can run web applications with a more restrictive privilege set
 for each user, group and so on, the operating system can capture all
 actions from userspace and apply its access control policies.
 I assume SELinux as the operating system feature here, but this is not
 limited to SELinux. I guess this discussion can be applied to any other
 advanced security feature as well.
 
 In my opinion, we need the following three facilities:
 
 1. The backend identifies the client and decides what privileges should
    be assigned to the launched web application prior to its invocation.
    The existing HTTP authentication is a candidate, but we don't 

Re: SNI in 2.2.x (Re: Time for 2.2.10?)

2009-04-01 Thread Plüm, Rüdiger, VF-Group
 

 -----Original Message-----
 From: Kaspar Brand 
 Sent: Monday, 30 March 2009 18:15
 To: dev@httpd.apache.org
 Subject: Re: SNI in 2.2.x (Re: Time for 2.2.10?)
 
 Ruediger Pluem wrote:
  Going through the archive I noticed several attachments with the same
  basename and a version string attached. Are these patches
  cumulative so that I only need to review the latest one?
 
 sni_sslverifyclient-v5.diff includes all improvements to
 ssl_hook_Access/ssl_callback_SSLVerify/ssl_callback_SSLVerify_CRL
 which I did in June 2008, yes. Then I stopped updating the trunk version
 (due to lack of responses) and only worked on further improvements to
 the 2.2.x patch (the latest version lives at
 http://sni.velox.ch/httpd-2.2.x-sni.20080928.patch).


A question regarding your patch:

@@ -427,29 +435,26 @@ int ssl_hook_Access(request_rec *r)
  * function and not by OpenSSL internally (and our function is aware of
  * both the per-server and per-directory contexts). So we cannot ask
  * OpenSSL about the currently verify depth. Instead we remember it in our
  * ap_ctx attached to the SSL* of OpenSSL.  We've to force the
  * renegotiation if the reconfigured/new verify depth is less than the
  * currently active/remembered verify depth (because this means more
  * restriction on the certificate chain).
  */
-    if ((sc->server->auth.verify_depth != UNSET) &&
-        (dc->nVerifyDepth == UNSET)) {
-        /* apply per-vhost setting, if per-directory config is not set */
-        dc->nVerifyDepth = sc->server->auth.verify_depth;
-    }

Why don't you stick with the old approach of updating dc->nVerifyDepth and
using this later on consistently (the same happens with other fields in the
same way later on)?

-    if (dc->nVerifyDepth != UNSET) {
+    if ((dc->nVerifyDepth != UNSET) ||
+        (sc->server->auth.verify_depth != UNSET)) {
         /* XXX: doesnt look like sslconn->verify_depth is actually used */
         if (!(n = sslconn->verify_depth)) {
             sslconn->verify_depth = n = sc->server->auth.verify_depth;
         }
 
         /* determine whether a renegotiation has to be forced */
-        if (dc->nVerifyDepth < n) {
+        if ((dc->nVerifyDepth < n) ||
+            (sc->server->auth.verify_depth < n)) {
             renegotiate = TRUE;
             ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
                           "Reduced client verification depth will force "
                           "renegotiation");
         }
     }

 /*


Regards

Rüdiger


Re: Adopting mod_remoteip to modules/metadata/ ?

2009-04-01 Thread Graham Leggett

William A. Rowe, Jr. wrote:


I have essentially finished mod_remoteip at this point and am looking
to find out the interest level of adopting this as a core module into
trunk (modules/metadata/ appears to be the most appropriate target)?


+1.

I had to code up a similar feature recently in something that needed to
know the end user's IP address; this will be very useful for apps behind
load balancers and reverse proxies.



If I get enough +1's this week I'll move the module and whip up some docs,
but in the meantime, here's the experimental config I was working with:

RemoteIpHeader X-IP
RemoteIpProxiesHeader X-Via-IP

RemoteIPTrustedProxy 192.168.0. localhost/8
RemoteIPInternalProxy 192.168.1

RemoteIPInternalProxyList conf/internal.lst
RemoteIPTrustedProxyList conf/trusted-xff.lst

Header echo X-Via-IP
Header echo X-IP

(the trusted-xff.lst is from the wikimedia XFF project).


(Having not yet had a chance to look at the code) How is the possibility
of multiple IPs in the same header handled, e.g.:

X-Forwarded-For: 10.2.3.4, 10.11.12.13

Regards,
Graham
--




Re: 2.2.11 mod_include

2009-04-01 Thread Dan Poirier
Lars Eilebrecht l...@eilebrecht.net writes:

 Torsten Foertsch wrote:

 [mod_include DATE_LOCAL bug]
 Is this a known bug?

 It's probably this one:
 https://issues.apache.org/bugzilla/show_bug.cgi?id=39369

I think that's right.  It's a test for Joe's fix for 39369, that has
only been applied to trunk.  It would be nice to backport that fix so
the stable release doesn't fail tests (or else do something with that
test).

-- 
Dan Poirier poir...@pobox.com



Re: 2.2.11 mod_include

2009-04-01 Thread Torsten Foertsch
On Wed 01 Apr 2009, Dan Poirier wrote:
 Lars Eilebrecht l...@eilebrecht.net writes:
  Torsten Foertsch wrote:
 
  [mod_include DATE_LOCAL bug]
 
  Is this a known bug?
 
  It's probably this one:
  https://issues.apache.org/bugzilla/show_bug.cgi?id=39369

 I think that's right.  It's a test for Joe's fix for 39369, that has
 only been applied to trunk.  It would be nice to backport that fix so
 the stable release doesn't fail tests (or else do something with that
 test).

Here is a patch that works for 2.2.11. The mod_rewrite patch cures the 
failure in t/modules/rewrite.t:

  https://issues.apache.org/bugzilla/show_bug.cgi?id=46428

in 2.2.11.

The mod_info problem in my original mail was caused by my local setup 
and is rather an Apache::Test problem (if at all). I have two mod_perl 
versions installed, mod_perl-debug.so and mod_perl.so. That has 
confused the test.

Should I attach these patches to the problem reports in bugzilla, or is 
that useless because they won't be backported officially?

Torsten

-- 
Need professional mod_perl support?
Just hire me: torsten.foert...@gmx.net
--- modules/mappers/mod_rewrite.c.xx	2009-04-01 11:28:01.0 +0200
+++ modules/mappers/mod_rewrite.c	2009-04-01 11:35:28.0 +0200
@@ -3869,7 +3869,20 @@
      * ourself).
      */
     if (p->flags & RULEFLAG_PROXY) {
-        /* PR#39746: Escaping things here gets repeated in mod_proxy */
+        /* For rules evaluated in server context, the mod_proxy fixup
+         * hook can be relied upon to escape the URI as and when
+         * necessary, since it occurs later.  If in directory context,
+         * the ordering of the fixup hooks is forced such that
+         * mod_proxy comes first, so the URI must be escaped here
+         * instead.  See PR 39746, 46428, and other headaches. */
+        if (ctx->perdir && (p->flags & RULEFLAG_NOESCAPE) == 0) {
+            char *old_filename = r->filename;
+
+            r->filename = ap_escape_uri(r->pool, r->filename);
+            rewritelog((r, 2, ctx->perdir, "escaped URI in per-dir context "
+                        "for proxy, %s -> %s", old_filename, r->filename));
+        }
+
         fully_qualify_uri(r);
 
         rewritelog((r, 2, ctx->perdir, "forcing proxy-throughput with %s",
--- modules/filters/mod_include.c.orig	2008-03-17 15:32:47.0 +0100
+++ modules/filters/mod_include.c	2009-04-01 14:45:41.0 +0200
@@ -580,7 +580,7 @@
     *p = '\0';
 }
 
-static void add_include_vars(request_rec *r, const char *timefmt)
+static void add_include_vars(request_rec *r)
 {
     apr_table_t *e = r->subprocess_env;
     char *t;
@@ -608,26 +608,17 @@
     }
 }
 
-static const char *add_include_vars_lazy(request_rec *r, const char *var)
+static const char *add_include_vars_lazy(request_rec *r, const char *var, const char *timefmt)
 {
     char *val;
     if (!strcasecmp(var, "DATE_LOCAL")) {
-        include_dir_config *conf =
-            (include_dir_config *)ap_get_module_config(r->per_dir_config,
-                                                       &include_module);
-        val = ap_ht_time(r->pool, r->request_time, conf->default_time_fmt, 0);
+        val = ap_ht_time(r->pool, r->request_time, timefmt, 0);
     }
     else if (!strcasecmp(var, "DATE_GMT")) {
-        include_dir_config *conf =
-            (include_dir_config *)ap_get_module_config(r->per_dir_config,
-                                                       &include_module);
-        val = ap_ht_time(r->pool, r->request_time, conf->default_time_fmt, 1);
+        val = ap_ht_time(r->pool, r->request_time, timefmt, 1);
     }
     else if (!strcasecmp(var, "LAST_MODIFIED")) {
-        include_dir_config *conf =
-            (include_dir_config *)ap_get_module_config(r->per_dir_config,
-                                                       &include_module);
-        val = ap_ht_time(r->pool, r->finfo.mtime, conf->default_time_fmt, 0);
+        val = ap_ht_time(r->pool, r->finfo.mtime, timefmt, 0);
     }
     else if (!strcasecmp(var, "USER_NAME")) {
         if (apr_uid_name_get(&val, r->finfo.user, r->pool) != APR_SUCCESS) {
@@ -684,7 +675,7 @@
         val = apr_table_get(r->subprocess_env, var);
 
         if (val == LAZY_VALUE) {
-            val = add_include_vars_lazy(r, var);
+            val = add_include_vars_lazy(r, var, ctx->time_str);
         }
     }
 
@@ -2423,7 +2414,7 @@
             /* get value */
             val_text = elts[i].val;
             if (val_text == LAZY_VALUE) {
-                val_text = add_include_vars_lazy(r, elts[i].key);
+                val_text = add_include_vars_lazy(r, elts[i].key, ctx->time_str);
             }
             val_text = ap_escape_html(ctx->dpool, elts[i].val);
             v_len = strlen(val_text);
@@ -3608,7 +3599,7 @@
          * environment */
         ap_add_common_vars(r);
         ap_add_cgi_vars(r);
-        add_include_vars(r, conf->default_time_fmt);
+        add_include_vars(r);
     }
     /* Always unset the content-length.  There is no way to know if
      * the content will be modified at 

AP_FTYPE_PROTOCOL before AP_FTYPE_CONTENT_SET sometimes in 2.2.10

2009-04-01 Thread Kevac Marko
Hello.

ap_http_header_filter (AP_FTYPE_PROTOCOL) is sometimes executed before my
AP_FTYPE_CONTENT_SET filter.

Any clue how that can happen?

As a result, my output header is not added.

2.2.10

-- 
Marko Kevac


Re: mod_include supporting POST subrequests

2009-04-01 Thread Torsten Foertsch
On Fri 20 Mar 2009, Graham Leggett wrote:
 Torsten Foertsch wrote:
  I need the include virtual directive to be able to issue POST
  requests. It should pass the request body to the subrequest. So I
  came up with the attached patch.
 
  It allows to write
 
    <!--#include method="post" virtual="..." --> or
    <!--#include method="inherit" virtual="..." -->
 
[...]
 Something like this has already been added to trunk, take a look at
 the KEEP_BODY and KEPT_BODY filters in modules/filters/mod_request.c.

I did and, frankly, it is not the solution I was looking for. One has to 
define a maximum body size to be kept. The body is kept in RAM, which can 
be a problem unless KeptBodySize is rather small. So I developed my 
patch further.

It now defers the ap_discard_request_body call as much as possible. This 
gives output filters the chance to read the request body. If the client is 
expecting a "100 Continue" message, it is sent just before the first 
line of output.

Is there a chance for the patch to make it into 2.3++? If yes I'll merge 
it with the KEPT_BODY stuff.

Currently my httpd passes the test framework, with a few more patches that 
are not related to this one (see the "2.2.11 mod_include" thread), with one 
exception: since the request body is read when output is potentially already 
on the wire, an HTTP_REQUEST_ENTITY_TOO_LARGE error cannot be sent to the 
client if it sends the request body with chunked TE. The only sensible 
solution I can think of would be to always send a 413 response if TE is 
chunked and a LimitRequestBody is active.

On Fri 20 Mar 2009, Nick Kew wrote:
 Erm ... that's ringing alarm bells.  The client, not the
 server, determines HTTP methods.  Or are you talking about
 proxied subrequests here?

I see it a bit differently. Subrequests for included documents are made on 
behalf of the HTML programmer who wrote the frame. He decides whether to pass 
on the request body and he decides which method to use, IMHO. And yes, 
the problem comes from subrequests that are proxied to another server.

Torsten

-- 
Need professional mod_perl support?
Just hire me: torsten.foert...@gmx.net
--- modules/filters/mod_include.c.orig	2008-03-17 15:32:47.0 +0100
+++ modules/filters/mod_include.c	2009-03-25 14:49:14.0 +0100
@@ -1656,6 +1656,7 @@
                                apr_bucket_brigade *bb)
 {
     request_rec *r = f->r;
+    enum {METHOD_GET, METHOD_POST, METHOD_INHERIT} method;
 
     if (!ctx->argc) {
         ap_log_rerror(APLOG_MARK,
@@ -1674,6 +1675,8 @@
         return APR_SUCCESS;
     }
 
+    method = METHOD_GET;
+
     while (1) {
         char *tag = NULL;
         char *tag_val = NULL;
@@ -1686,6 +1689,29 @@
             break;
         }
 
+        if (tag[0] == 'm' && !strcmp(tag, "method")) {
+            if ((tag_val[0] == 'g' || tag_val[0] == 'G')
+                && !strcasecmp(tag_val, "get")) {
+                method = METHOD_GET;
+            }
+            else if ((tag_val[0] == 'p' || tag_val[0] == 'P')
+                     && !strcasecmp(tag_val, "post")) {
+                method = METHOD_POST;
+            }
+            else if ((tag_val[0] == 'i' || tag_val[0] == 'I')
+                     && !strcasecmp(tag_val, "inherit")) {
+                method = METHOD_INHERIT;
+            }
+            else {
+                ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, "unknown value "
+                              "\"%s\" to parameter \"method\" of tag "
+                              "include in %s", tag_val, r->filename);
+                SSI_CREATE_ERROR_BUCKET(ctx, f, bb);
+                break;
+            }
+            continue;
+        }
+
         if (strcmp(tag, "virtual") && strcmp(tag, "file")) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, "unknown parameter "
                           "\"%s\" to tag include in %s", tag, r->filename);
@@ -1712,7 +1738,15 @@
             }
         }
         else {
-            rr = ap_sub_req_lookup_uri(parsed_string, r, f->next);
+            if (method == METHOD_GET
+                || (method == METHOD_INHERIT && strcmp(r->method, "POST"))) {
+                rr = ap_sub_req_lookup_uri(parsed_string, r, f->next);
+            }
+            else {  /* POST */
+                method = METHOD_POST;
+                apr_table_setn(r->notes, "subreq-pass-request-body", "1");
+                rr = ap_sub_req_method_uri("POST", parsed_string, r, f->next);
+            }
         }
 
         if (!error_fmt && rr->status != HTTP_OK) {
@@ -1734,10 +1768,22 @@
             ap_set_module_config(rr->request_config, &include_module, r);
         }
 
+        /* XXX: would be good to check for EOS on rr->input_filters
+         * if method==POST and issue a warning if so.
+         */
+
         if (!error_fmt && ap_run_sub_req(rr)) {
             error_fmt = "unable to include \"%s\" in parsed file %s";
         }
 
+        /* method=POST must be specified *before* *each*
+         * virtual=...
+         */
+        if (method != METHOD_GET) {
+            method = METHOD_GET;

Re: mod_include supporting POST subrequests

2009-04-01 Thread Graham Leggett

Torsten Foertsch wrote:

I did and, frankly, it is not the solution I was looking for. One has to 
define a max. body size to be kept. The body is kept in RAM which can 
be a problem unless KeptBodySize is rather small. So I developed my 
patch further.


It defers now the ap_discard_request_body call as much as possible. This 
gives output filters the chance to read the req body. If the client is 
expecting a 100 Continue message it is sent just before the first 
line of output.


Is there a chance for the patch to make it into 2.3++? If yes I'll merge 
it with the KEPT_BODY stuff.


Having two separate mechanisms to solve the same problem is not ideal. 
In addition, creating a solution that only works in one place 
(mod_include) is less ideal still.


It should be relatively straightforward to amend the KEEP_BODY and 
KEPT_BODY filters so that, by default, the first attempt to read the 
body is passed through, and the second and subsequent attempts to read 
the body return an empty brigade.


This will give you the behaviour you are looking for, and it will work 
anywhere within the server, not just in mod_include.
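
Something along these lines, perhaps (a rough sketch only, not the actual
mod_request code; in practice the already-read flag would have to live on
the main request so that subrequest filter instances can see it):

  #include "httpd.h"
  #include "util_filter.h"
  #include "apr_buckets.h"

  /* Sketch of a "pass the body through once" input filter: the first
   * reader gets the real request body, later readers get an empty
   * brigade containing only EOS, and nothing is buffered in between. */
  static apr_status_t pass_body_once_filter(ap_filter_t *f,
                                            apr_bucket_brigade *bb,
                                            ap_input_mode_t mode,
                                            apr_read_type_e block,
                                            apr_off_t readbytes)
  {
      int *already_read = f->ctx;   /* flag set up when the filter is
                                     * inserted; simplified placement */

      if (*already_read) {
          /* second and subsequent attempts: return an "empty" brigade */
          APR_BRIGADE_INSERT_TAIL(bb,
              apr_bucket_eos_create(f->c->bucket_alloc));
          return APR_SUCCESS;
      }
      *already_read = 1;

      /* first attempt: hand the request body straight through */
      return ap_get_brigade(f->next, bb, mode, block, readbytes);
  }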


Regards,
Graham
--




Re: Adopting mod_remoteip to modules/metadata/ ?

2009-04-01 Thread William A. Rowe, Jr.

Graham Leggett wrote:

(Having not yet had a chance to look at the code) How is the possibility
of multiple IPs in the same header handled, e.g.:

X-Forwarded-For: 10.2.3.4, 10.11.12.13


I think you'll find your question is answered in the README I referenced.
It's handled fine.  The interesting point is that, presuming that the
nearest remote_host and 10.11.12.13 both have 'Internal' trust, meaning
they are known to our network, while 67.151.55.1 and 178.21.1.10 are given
TrustedProxy status, we would still refuse to acknowledge another network's
private subnet.  Therefore:

X-Forwarded-For: 10.2.3.4, 67.151.55.1, 178.21.1.10, 10.11.12.13

results in a remote_host of 67.151.55.1, and the header value is updated to
reflect that this host still makes an X-Forwarded-For assertion, e.g.
X-Forwarded-For: 10.2.3.4

Will we someday introduce a feature to treat this as a decorated remote
host name, something like 67.151.55.1_10.2.3.4?  I'd suggest we could,
but that feature could break any number of third-party modules attempting
to resolve this address.
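
To make the walk above concrete, here is a rough illustration in C. This
is not mod_remoteip's actual code; is_internal(), is_trusted() and
is_private() are simplistic prefix-matching stand-ins for the module's
real netmask checks, tailored only to the example addresses:

  #include <stdio.h>
  #include <string.h>

  /* crude stand-ins for RemoteIPInternalProxy / RemoteIPTrustedProxy */
  static int is_private(const char *ip)  { return !strncmp(ip, "10.", 3); }
  static int is_internal(const char *ip) { return !strncmp(ip, "10.11.", 6) ||
                                                  !strncmp(ip, "192.168.", 8); }
  static int is_trusted(const char *ip)  { return is_internal(ip) ||
                                                  !strncmp(ip, "67.151.", 7) ||
                                                  !strncmp(ip, "178.21.", 7); }

  /* Walk the X-Forwarded-For list from the right, popping proxies we
   * trust; stop when a merely "trusted" proxy asserts someone else's
   * private address.  xff is modified in place to hold the remainder. */
  static const char *effective_remote_ip(char *xff, const char *peer)
  {
      const char *current = peer;          /* the actual TCP peer */

      while (is_trusted(current)) {
          char *comma = strrchr(xff, ',');
          const char *asserted = comma ? comma + 1 : xff;

          while (*asserted == ' ') asserted++;
          if (!is_internal(current) && is_private(asserted)) {
              break;                       /* refuse the foreign 10.x claim */
          }
          current = asserted;
          if (!comma) break;               /* whole header consumed */
          *comma = '\0';
      }
      return current;
  }

  int main(void)
  {
      char xff[] = "10.2.3.4, 67.151.55.1, 178.21.1.10, 10.11.12.13";
      printf("remote: %s, remaining XFF: %s\n",
             effective_remote_ip(xff, "192.168.1.5"), xff);
      /* prints: remote: 67.151.55.1, remaining XFF: 10.2.3.4 */
      return 0;
  }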


RE: HTTP over SCTP

2009-04-01 Thread Preethi Natarajan
 

 -----Original Message-----
 From: Paul Querna [mailto:p...@querna.org]
 Sent: Tuesday, March 31, 2009 4:54 PM
 To: dev@httpd.apache.org
 Cc: d...@apr.apache.org; Jonathan Leighton; Preethi Natarajan (prenatar)
 Subject: Re: HTTP over SCTP
 
 
 Please post the patches, preferably against trunk:
 http://httpd.apache.org/dev/patches.html
 
 Once we see the patches we will be in a much better position
 to give feedback,

Paul,

Some time ago, we were directed to do the same. You can find a report (and
patch) here -- https://issues.apache.org/bugzilla/show_bug.cgi?id=37202.

Briefly, the mods to httpd/APR are:
- a new Listen directive to include the transport information (I believe this
piece created some discussion and we didn't arrive at any conclusion).
- changes to the configure files to detect sendmsg and recvmsg on the system,
so we can send/recv on specific SCTP streams.

We would also like to explore the possibility of introducing new send and
recv APIs that take the SCTP stream ID as another argument -- kind of
wrappers for sctp_sendmsg and sctp_recvmsg.
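
For illustration, the kind of wrapper we have in mind might look roughly
like the following (the function name is hypothetical, not an existing APR
call; it just forwards to sctp_sendmsg(3) from lksctp-tools and exposes the
stream id as an argument):

  #include <stdint.h>
  #include <netinet/sctp.h>
  #include "apr_network_io.h"
  #include "apr_portable.h"

  /* Hypothetical wrapper: send buf on a specific SCTP stream of an
   * already-connected socket. */
  static apr_status_t sctp_stream_send(apr_socket_t *sock, const char *buf,
                                       apr_size_t *len, uint16_t stream_id)
  {
      apr_os_sock_t fd;
      int n;

      apr_os_sock_get(&fd, sock);           /* underlying OS descriptor */
      n = sctp_sendmsg(fd, buf, *len,
                       NULL, 0,             /* no explicit peer address */
                       0,                   /* ppid */
                       0,                   /* flags */
                       stream_id,           /* SCTP stream number */
                       0, 0);               /* ttl, context */
      if (n < 0) {
          *len = 0;
          return APR_EGENERAL;              /* real code would map errno */
      }
      *len = (apr_size_t)n;
      return APR_SUCCESS;
  }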

Thanks,
Preethi




Re: 2.2.11 mod_include

2009-04-01 Thread Dan Poirier
Torsten Foertsch torsten.foert...@gmx.net writes:

 On Wed 01 Apr 2009, Dan Poirier wrote:
 Lars Eilebrecht l...@eilebrecht.net writes:
  Torsten Foertsch wrote:
 
  [mod_include DATE_LOCAL bug]
 
  Is this a known bug?
 
  It's probably this one:
  https://issues.apache.org/bugzilla/show_bug.cgi?id=39369

 I think that's right.  It's a test for Joe's fix for 39369, that has
 only been applied to trunk.  It would be nice to backport that fix so
 the stable release doesn't fail tests (or else do something with that
 test).

 Here is a patch that works for 2.2.11. The mod_rewrite patch cures the 
 failure in t/modules/rewrite.t:

   https://issues.apache.org/bugzilla/show_bug.cgi?id=46428

 in 2.2.11.
...
 Should I attach these patches to the problem reports in bugzilla or is 
 that useless because they wont be backported officially?

 Torsten

These are two separate problems that just happen to have been fixed
recently in trunk.  I haven't looked at the rewrite one.

I don't know any reason why these couldn't be backported, but someone
with commit privileges will have to propose them for backporting.

Dan



Re: mod_include supporting POST subrequests

2009-04-01 Thread Torsten Foertsch
On Wed 01 Apr 2009, Graham Leggett wrote:
  Is there a chance for the patch to make it into 2.3++? If yes I'll
  merge it with the KEPT_BODY stuff.

 Having two separate mechanisms to solve the same problem is not
 ideal. In addition, creating a solution that only works in one place
 (mod_include), is less ideal still.

 It should be relatively straightforward to amend the KEEP_BODY and
 KEPT_BODY filters so that, by default, the first attempt to read the
 body is passed through, and the second and subsequent attempts to
 read the body return an empty brigade.

 This will give you the behaviour you are looking for, and it will
 work anywhere within the server, not just in mod_include.

That is what I thought of doing. If KeptBodySize is 0, the body is passed to 
the first (sub)request that reads it; all subsequent subrequests will see an 
empty stream. If KeptBodySize is > 0, the first (sub)request reads the whole 
body and the KEEP_BODY_FILTER saves as much as is configured; subsequent 
subrequests are passed this kept body.

I also thought of writing the body to temporary files, if so configured. That 
would make it possible to preserve a larger body without much headache about 
memory consumption. Plus, I think it would be a nice feature to be able 
to write

  <!--#include method="post" body="a=b;c=d" virtual="..." -->

and perhaps also an encoding="multipart/form-data" attribute, would it 
not?

BTW, the current patch is not only for mod_include. It should work 
(although this is not tested) for other filters/handlers as well, as long 
as the main request sets the subreq-pass-request-body note to prevent the 
header table from being overwritten for the subrequest. This more or less 
resembles what r->kept_body does in 2.3. Alternatively the caller could 
overwrite the header table after creating the subrequest but before running 
it, and set the original CL and TE headers.

But to restate my question: can I take your reply as a "yes, go ahead, it 
would be nice to have that feature in Apache httpd"?

Torsten

-- 
Need professional mod_perl support?
Just hire me: torsten.foert...@gmx.net


Re: SNI in 2.2.x (Re: Time for 2.2.10?)

2009-04-01 Thread Kaspar Brand
Plüm, Rüdiger, VF-Group wrote:
 A question regarding your patch:
 
 @@ -427,29 +435,26 @@ int ssl_hook_Access(request_rec *r)
   * function and not by OpenSSL internally (and our function is aware of
   * both the per-server and per-directory contexts). So we cannot ask
   * OpenSSL about the currently verify depth. Instead we remember it in 
 our
   * ap_ctx attached to the SSL* of OpenSSL.  We've to force the
   * renegotiation if the reconfigured/new verify depth is less than the
   * currently active/remembered verify depth (because this means more
   * restriction on the certificate chain).
   */
 -    if ((sc->server->auth.verify_depth != UNSET) &&
 -        (dc->nVerifyDepth == UNSET)) {
 -        /* apply per-vhost setting, if per-directory config is not set */
 -        dc->nVerifyDepth = sc->server->auth.verify_depth;
 -    }
 
 Why don't you stick with the old approach of updating dc->nVerifyDepth and
 using this later on consistently

Because it was called ugly by Joe (and not threadsafe, possibly[?]):

http://mail-archives.apache.org/mod_mbox/httpd-dev/200806.mbox/%3c20080604140111.ga12...@redhat.com%3e

 (the same happens with other fields in the same way later on)?

I don't think any of my changes to ssl_hook_Access adds an assignment
to any dc->something parameter (or it would be an oversight/bug if it did).

Kaspar