Re: Piped logging, graceful restart, broken pipe

2017-08-18 Thread Ewald Dieterich

On 17.08.2017 19:21, Eric Covener wrote:

On Thu, Aug 17, 2017 at 3:09 AM, Ewald Dieterich <ew...@mailbox.org> wrote:

I configured CustomLog with a pipe:

CustomLog "|/usr/bin/logger -p local1.info -t apache2" combined

I get this message in the error log:

(32)Broken pipe: [...] AH00646: Error writing to |/usr/bin/logger -p
local1.info -t apache2

Is this something that can be fixed? Or am I doing something wrong here?


I think it's a design problem. The last time it was discussed was in 
the thread "rotatelogs and SIGTERM?".
Is there a reason why CustomLog doesn't support logging directly to 
syslog like ErrorLog does? Like this:


CustomLog syslog:local1

That would solve my problem (I just want all access log entries to go to 
syslog) and also the log rotation problem since Apache wouldn't need to 
worry about this anymore. You could configure log rotation completely 
outside the scope of Apache.
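
For comparison, the error log side already supports this: ErrorLog
accepts a syslog target with an optional facility, e.g.:

ErrorLog syslog:local1

The CustomLog form proposed above would simply mirror that syntax for
the access log.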


One limitation that the ErrorLog documentation mentions is that the 
syslog facility is effectively global, but I don't think that would 
break this feature.


What do you think? Are there any other problems with this approach?


Piped logging, graceful restart, broken pipe

2017-08-17 Thread Ewald Dieterich

I configured CustomLog with a pipe:

CustomLog "|/usr/bin/logger -p local1.info -t apache2" combined

Now I start downloading a large file. During this download I gracefully 
restart Apache. The download continues, of course. But when the download 
is finished, I don't get an entry in the access log; instead I get this 
message in the error log:


(32)Broken pipe: [...] AH00646: Error writing to |/usr/bin/logger -p 
local1.info -t apache2


It looks like the graceful restart doesn't account for the 
/usr/bin/logger process still being required until the last worker 
has finished. That is, the graceful restart kills the running 
/usr/bin/logger immediately and starts a new one.


Is this something that can be fixed? Or am I doing something wrong here?


Re: Segfault in mod_xml2enc.c with big5 charset

2017-03-03 Thread Ewald Dieterich

On 05.12.2016 14:38, Ewald Dieterich wrote:

I have a segfault in mod_xml2enc.c, xml2enc_ffunc() when processing a
page with big5 charset.


I have another crash at exactly the same location, this time with 
charset "euc-kr". mod_xml2enc is definitely not able to handle 
multi-byte charsets reliably.


Segfault in mod_xml2enc.c with big5 charset

2016-12-05 Thread Ewald Dieterich
I have a segfault in mod_xml2enc.c, xml2enc_ffunc() when processing a 
page with big5 charset.


The crash happens in line 472 because ctx->convset is NULL:

rv = apr_xlate_conv_buffer(ctx->convset, buf+(bytes - insz),
                           &insz, ctx->buf, &ctx->bytes);

The sequence leading to this crash is:

* Call apr_xlate_conv_buffer(...). Return value is APR_INCOMPLETE (_not_ 
APR_EINCOMPLETE) (probably because the buffer ends in the middle of a 
multi-byte character).


* In "switch (rv)" enter the default case, set ctx->convset to NULL, and 
despite what the comment says ("Bail out, flush ...") don't bail out, 
instead continue with the loop.


* Call apr_xlate_conv_buffer(NULL, ...), crash with a segfault.

2 questions:

(1) Is APR_INCOMPLETE the same as APR_EINCOMPLETE when using the xlate 
API? If so, the "case APR_EINCOMPLETE" should probably also handle "case 
APR_INCOMPLETE".


(2) What's the proper way to bail out from the default case? Just 
return, or is there anything to consider regarding ctx->bbnext?


Thanks for your help.


Re: Multiple SessionCryptoPassphrase keys lead to segfault when decrypting session

2016-11-07 Thread Ewald Dieterich

On 04.11.2016 16:05, Ewald Dieterich wrote:

This leads to a segfault in mod_session.c,
session_identity_decode() because the tokenization assumes valid data
when in this case it's just binary rubbish (one of the apr_strtok()
calls segfaults).


BTW, the segfault in mod_session.c, session_identity_decode() happens 
when the session data (z->encoded) starts with this string: =&


Multiple SessionCryptoPassphrase keys lead to segfault when decrypting session

2016-11-04 Thread Ewald Dieterich
mod_session_crypto supports multiple SessionCryptoPassphrase keys. The 
idea is that when decrypting a session you try one key after the other 
until the decryption succeeds, assuming that the successful key is the 
key that was used when the session was encrypted.
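
For illustration, the trial loop conceptually looks like this (a 
simplified sketch; try_decrypt() is a hypothetical helper standing in 
for the module's apr_crypto calls):

int i;
apr_status_t res = APR_EGENERAL;
char *decrypted = NULL;

for (i = 0; i < dconf->passphrases->nelts; i++) {
    const char *passphrase = APR_ARRAY_IDX(dconf->passphrases, i, char *);
    res = try_decrypt(r, f, passphrase, in, &decrypted);  /* hypothetical */
    if (res == APR_SUCCESS) {
        break;  /* assume this is the key the session was encrypted with */
    }
}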


But this assumption obviously doesn't hold. I have a case where a wrong 
(old) key successfully decrypts a session that was encrypted with a 
different (new) key. This leads to a segfault in mod_session.c, 
session_identity_decode() because the tokenization assumes valid data 
when in this case it's just binary rubbish (one of the apr_strtok() 
calls segfaults).


I "fixed" this by adding an additional sanity check to 
mod_session_crypto.c, decrypt_string(), see the attached patch. Of 
course this fix only works for cases where the seemingly successfully 
decrypted binary rubbish contains 0x00 somewhere in the decrypted data.


Any ideas for a proper fix?

Sorry that I can't provide you with the actual session data since it 
contains sensitive information (username and password).
--- mod_session_crypto.c.orig	2016-11-04 15:34:46.740015054 +0100
+++ mod_session_crypto.c	2016-11-04 15:36:02.407403627 +0100
@@ -321,6 +321,12 @@
         decryptedlen += tlen;
         decrypted[decryptedlen] = 0;
 
+        if (strlen(decrypted) != decryptedlen) {
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
+                          "decryption sanity check failed");
+            continue;
+        }
+
         break;
     }
 


Re: Random AH01842 errors in mod_session_crypto

2016-09-12 Thread Ewald Dieterich

On 06/13/2016 09:38 AM, Ewald Dieterich wrote:

I configured form authentication with mod_auth_form, mod_session_cookie
and mod_session_crypto in Apache 2.4.20 on Debian unstable and get
random AH01842 errors ("decrypt session failed, wrong passphrase"). The
passphrase was not changed when this happens.

It looks like the error occurs when the following conditions are met:

* mpm_worker enabled (never experienced the error with mpm_prefork)
* Same user doing multiple requests in parallel using the same session
(don't see the error when the user is doing only sequential requests)


Looks like the problem is this:

* In session_crypto_init() a crypto context is created from a global 
pool (server->pconf).
* In encrypt_string() and decrypt_string() a key is created from the 
context via apr_crypto_passphrase() using the global pool for allocating 
memory for the key.

* Multiple threads might access the global pool at the same time.
* APR documentation about pool thread-safety: "Note that most operations 
on pools are not thread-safe: a single pool should only be accessed by a 
single thread at any given time."


I changed mod_session_crypto to use the request pool instead and it 
seems that this fixed my problem.


I think this also fixes a memory consumption problem: Keys are only 
created, but never explicitly destroyed (or reused). So for every 
request memory is allocated from the global pool, but this memory is 
never freed during the lifetime of mod_session_crypto. Using the request 
pool fixes this problem because it is destroyed when the request is done.


See the attached patch session-crypto.patch that I created for 2.4.20.
--- a/modules/session/mod_session_crypto.c
+++ b/modules/session/mod_session_crypto.c
@@ -34,7 +34,7 @@
 
 #include "apr_crypto.h"        /* for apr_*_crypt et al */
 
-#define CRYPTO_KEY "session_crypto_context"
+#define DRIVER_KEY "session_crypto_driver"
 
 module AP_MODULE_DECLARE_DATA session_crypto_module;
 
@@ -333,6 +333,35 @@
 
 }
 
+static int session_crypto_init_per_request(request_rec *r, const apr_crypto_t **ff)
+{
+    apr_crypto_t *f = NULL;
+
+    session_crypto_conf *conf = ap_get_module_config(r->server->module_config,
+                                                     &session_crypto_module);
+
+    if (conf->library) {
+        const apr_crypto_driver_t *driver = NULL;
+        apr_pool_t *p = r->pool;
+        apr_status_t rv;
+
+        apr_pool_userdata_get((void **)&driver, DRIVER_KEY,
+                              r->server->process->pconf);
+
+        rv = apr_crypto_make(&f, driver, conf->params, p);
+        if (APR_SUCCESS != rv) {
+            ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, APLOGNO(01848)
+                          "The crypto context could not be initialised");
+            return rv;
+        }
+    }
+
+    *ff = f;
+
+    return OK;
+}
+
+
 /**
  * Crypto encoding for the session.
  *
@@ -349,7 +378,13 @@
                                                      &session_crypto_module);
 
     if (dconf->passphrases_set && z->encoded && *z->encoded) {
-        apr_pool_userdata_get((void **)&f, CRYPTO_KEY, r->server->process->pconf);
+        res = session_crypto_init_per_request(r, &f);
+        if (res != OK) {
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, res, r,
+                          "session_crypto_encode: session_crypto_init_per_request failed");
+            return res;
+        }
+
         res = encrypt_string(r, f, dconf, z->encoded, &encoded);
         if (res != OK) {
             ap_log_rerror(APLOG_MARK, APLOG_DEBUG, res, r, APLOGNO(01841)
@@ -380,8 +415,13 @@
                                                      &session_crypto_module);
 
     if ((dconf->passphrases_set) && z->encoded && *z->encoded) {
-        apr_pool_userdata_get((void **)&f, CRYPTO_KEY,
-                              r->server->process->pconf);
+        res = session_crypto_init_per_request(r, &f);
+        if (res != OK) {
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, res, r,
+                          "session_crypto_decode: session_crypto_init_per_request failed");
+            return res;
+        }
+
         res = decrypt_string(r, f, dconf, z->encoded, &encoded);
         if (res != APR_SUCCESS) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, res, r, APLOGNO(01842)
@@ -402,7 +442,6 @@
                            apr_pool_t *ptemp, server_rec *s)
 {
     const apr_crypto_driver_t *driver = NULL;
-    apr_crypto_t *f = NULL;
 
     session_crypto_conf *conf = ap_get_module_config(s->module_config,
                                                      &session_crypto_module);
@@ -451,19 +490,11 @@
         return rv;
     }
 
-    rv = apr_crypto_make(&f, driver, conf->params, p);
-    if (APR_SUCCESS != rv) {
-        ap_log_error(APLOG_MARK, APLOG_ERR, rv, s, APLOGNO(01848)
-                     "The crypto library '%s' could not be initialised",
-                     conf->library);
-        return rv;
-    }
-
 ap_log_error(APLOG_M

Random AH01842 errors in mod_session_crypto

2016-06-13 Thread Ewald Dieterich
I configured form authentication with mod_auth_form, mod_session_cookie 
and mod_session_crypto in Apache 2.4.20 on Debian unstable and get 
random AH01842 errors ("decrypt session failed, wrong passphrase"). The 
passphrase was not changed when this happens.


It looks like the error occurs when the following conditions are met:

* mpm_worker enabled (never experienced the error with mpm_prefork)
* Same user doing multiple requests in parallel using the same session 
(don't see the error when the user is doing only sequential requests)


I already added some debug logging to check the passphrase and it's 
always the same for both encryption and decryption when the error occurs.


To reproduce the error I wrote a Perl script that logs in and then 
requests a protected page in an endless loop, and I start the script 
multiple times in parallel. It can still take quite some time for the 
error to occur, but it's the best I came up with for easy reproduction. 
In cases reported "from the field" with real users, real browsers and 
real Web applications the error occurs much more frequently.


Does anyone want to look into this? If so, I can give more information 
about the test setup and the Perl script. Any help would be really 
appreciated.


Buffer size in mod_session_crypto.c, decrypt_string()

2015-11-19 Thread Ewald Dieterich

This is from mod_session_crypto.c, decrypt_string():

/* strip base64 from the string */
decoded = apr_palloc(r->pool, apr_base64_decode_len(in));
decodedlen = apr_base64_decode(decoded, in);
decoded[decodedlen] = '\0';

Shouldn't that be ("+ 1" for the added '\0'):

decoded = apr_palloc(r->pool, apr_base64_decode_len(in) + 1);

At least that's how it's done in e.g. mod_auth_basic.c. Or can we make 
any assumptions about the number of characters that 
apr_base64_decode_len() returns?
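
For reference, the pattern in mod_auth_basic.c looks like this (quoted 
from memory, so treat as approximate):

decoded_line = apr_palloc(r->pool, apr_base64_decode_len(auth_line) + 1);
length = apr_base64_decode(decoded_line, auth_line);
/* Null-terminate the string. */
decoded_line[length] = '\0';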




Reverse proxy: invalid Content-Length leads to 413 + 400 errors mixed up

2015-01-08 Thread Ewald Dieterich

I set up a simple reverse proxy with Apache 2.4.10 on Debian unstable:

ProxyPass / http://backend/
ProxyPassReverse / http://backend/

When I send a request to the reverse proxy with an invalid 
Content-Length header, I get two response bodies concatenated, a 413 
and a 400:


$ curl -i -H "Content-Length: a" http://frontend/
HTTP/1.1 413 Request Entity Too Large
Date: Thu, 08 Jan 2015 09:00:33 GMT
Server: Apache/2.4.10 (Debian)
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>413 Request Entity Too Large</title>
</head><body>
<h1>Request Entity Too Large</h1>
The requested resource<br /><br />
does not allow request data with GET requests, or the amount of data
provided in the request exceeds the capacity limit.
<hr>
<address>Apache/2.4.10 (Debian) Server at frontend Port 80</address>
</body></html>
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at frontend Port 80</address>
</body></html>

Even though the HTTP status code in the response is 413, the access log 
shows a 400 error:


[...] "GET / HTTP/1.1" 400 904 "-" "curl/7.26.0"

I think that first a 413 error is created that later gets partly 
replaced by a 400 error.


Here are some log entries:

[...] AH01587: Invalid Content-Length

=> That's where a 413 (HTTP_REQUEST_ENTITY_TOO_LARGE) is created.

[...] (-102)Unknown error -102: [client 10.128.128.95:46000] AH01095: 
prefetch request body failed to 10.8.19.114:80 (frontend) from 
10.128.128.95 ()


=> That's where a 400 (HTTP_BAD_REQUEST) is returned.

Any ideas how to fix this so that this situation is handled as a single 
error and not as two errors mixed up?


Re: Reverse proxy: invalid Content-Length leads to 413 + 400 errors mixed up

2015-01-08 Thread Ewald Dieterich

On 01/08/2015 01:39 PM, Eric Covener wrote:

On Thu, Jan 8, 2015 at 4:38 AM, Ewald Dieterich ew...@mailbox.org wrote:

Any ideas how to fix this so that this situation is handled as a single
error and not as two errors mixed up?


in mod_proxy.c you will see at least 1 stanza like this:

    status = ap_get_brigade(r->input_filters, temp_brigade,
                            AP_MODE_READBYTES, APR_BLOCK_READ,
                            MAX_MEM_SPOOL - bytes_read);
    if (status != APR_SUCCESS) {
        ap_log_rerror(APLOG_MARK, APLOG_ERR, status, r, APLOGNO(01095)
                      "prefetch request body failed to %pI (%s)"
                      " from %s (%s)",
                      p_conn->addr, p_conn->hostname ? p_conn->hostname: "",
                      c->client_ip, c->remote_host ? c->remote_host: "");
        return HTTP_BAD_REQUEST;
    }

The proper pattern in 2.4.x and later is to not return an error like that:

 return ap_map_http_request_error(status, HTTP_BAD_REQUEST);

In the case of that -102 error, the -102 will be returned verbatim
instead (AP_FILTER_ERROR). Are you able to test and verify?


Hope I tested the right thing. ap_map_http_request_error() is not 
available in 2.4.x, so I added it from trunk and replaced the return 
statements in the stanzas above as suggested. I attached a patch with my 
changes to 2.4.10.


The response looks good now:

$ curl -i -H "Content-Length: a" http://frontend/
HTTP/1.1 413 Request Entity Too Large
Date: Thu, 08 Jan 2015 14:22:09 GMT
Server: Apache/2.4.10 (Debian)
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>413 Request Entity Too Large</title>
</head><body>
<h1>Request Entity Too Large</h1>
The requested resource<br /><br />
does not allow request data with GET requests, or the amount of data
provided in the request exceeds the capacity limit.
<hr>
<address>Apache/2.4.10 (Debian) Server at frontend Port 80</address>
</body></html>

But the access log entry is still wrong. Now a 200 is logged:

[...] "GET / HTTP/1.1" 200 590 "-" "curl/7.26.0"

I still see the -102 error:

[...] (-102)Unknown error -102: [client 10.128.128.95:46766] AH01095: 
prefetch request body failed to 10.8.19.114:80 (backend) from 
10.128.128.95 ()


I guess there are more changes in trunk that I would need to add?
--- a/modules/proxy/mod_proxy_http.c
+++ b/modules/proxy/mod_proxy_http.c
@@ -324,7 +324,7 @@
                           " from %s (%s)", p_conn->addr,
                           p_conn->hostname ? p_conn->hostname: "",
                           c->client_ip, c->remote_host ? c->remote_host: "");
-            return HTTP_BAD_REQUEST;
+            return ap_map_http_request_error(status, HTTP_BAD_REQUEST);
         }
     }
 
@@ -475,7 +475,7 @@
                           " from %s (%s)", p_conn->addr,
                           p_conn->hostname ? p_conn->hostname: "",
                           c->client_ip, c->remote_host ? c->remote_host: "");
-            return HTTP_BAD_REQUEST;
+            return ap_map_http_request_error(status, HTTP_BAD_REQUEST);
         }
     }
 
@@ -624,7 +624,7 @@
                           " from %s (%s)", p_conn->addr,
                           p_conn->hostname ? p_conn->hostname: "",
                           c->client_ip, c->remote_host ? c->remote_host: "");
-            return HTTP_BAD_REQUEST;
+            return ap_map_http_request_error(status, HTTP_BAD_REQUEST);
         }
     }
 
@@ -807,7 +807,7 @@
                       " from %s (%s)",
                       p_conn->addr, p_conn->hostname ? p_conn->hostname: "",
                       c->client_ip, c->remote_host ? c->remote_host: "");
-        return HTTP_BAD_REQUEST;
+        return ap_map_http_request_error(status, HTTP_BAD_REQUEST);
     }
 
     apr_brigade_length(temp_brigade, 1, &bytes);
--- a/include/http_protocol.h
+++ b/include/http_protocol.h
@@ -502,6 +502,23 @@
  */
 AP_DECLARE(long) ap_get_client_block(request_rec *r, char *buffer, apr_size_t bufsiz);
 
+/*
+ * Map specific APR codes returned by the filter stack to HTTP error
+ * codes, or the default status code provided. Use it as follows:
+ *
+ * return ap_map_http_request_error(rv, HTTP_BAD_REQUEST);
+ *
+ * If the filter has already handled the error, AP_FILTER_ERROR will
+ * be returned, which is cleanly passed through.
+ *
+ * These mappings imply that the filter stack is reading from the
+ * downstream client, the proxy will map these codes differently.
+ * @param rv APR status code
+ * @param status Default HTTP code should the APR code not be recognised
+ * @return Mapped HTTP status code
+ */
+AP_DECLARE(int) ap_map_http_request_error(apr_status_t rv, int status);
+
 /**
  * In HTTP/1.1, any method can have a body.  However, most GET handlers
  * wouldn't know what to do with a request body if they received one.
--- a/modules/http/http_filters.c
+++ b/modules/http/http_filters.c
@@ -1416,6 +1416,42

Re: Reverse proxy: invalid Content-Length leads to 413 + 400 errors mixed up

2015-01-08 Thread Ewald Dieterich

On 01/08/2015 04:15 PM, Yann Ylavic wrote:

Can you test this (attached) patch please (without yours applied)?


Or with yours applied, just changing "return 
ap_map_http_request_error(status, HTTP_BAD_REQUEST);" to "return 
(status == AP_FILTER_ERROR) ? DONE : ap_map_http_request_error(status, 
HTTP_BAD_REQUEST);".


Looks good. I tested both your patch and my modified one, and now the 
error response and the access log entry are both OK.
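
For reference, the tested change in each prefetch error stanza reads 
(context abbreviated):

    if (status != APR_SUCCESS) {
        /* ... existing ap_log_rerror() call ... */
        return (status == AP_FILTER_ERROR)
               ? DONE
               : ap_map_http_request_error(status, HTTP_BAD_REQUEST);
    }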


xml2enc_html_entity_fixups() consuming all memory

2014-11-05 Thread Ewald Dieterich
I'm running xml2enc in a reverse proxy setup (Apache httpd 2.4.4, but 
2.4.10 shows the same behavior). For a large response from the backend, 
xml2enc_html_entity_fixups() is called with *bytesp == 4007511. The 
repeated call of apr_pstrcat() in the while loop consumes all available 
memory: each iteration allocates a fresh copy of the growing result 
from the request pool, and pool memory isn't released until the request 
finishes, so total allocation grows quadratically with the response 
size. Apache then either aborts itself or gets killed by the Linux OOM 
killer.


The only fix that I can think of is to manage the memory myself; see my 
patch below. Is there a better way to fix this?


--- a/modules/filters/mod_xml2enc.c
+++ b/modules/filters/mod_xml2enc.c
@@ -610,10 +610,25 @@ static int xml2enc_html_entity_fixups(ap
         bytes_processed += inlen;
         assert((outlen >= 0) && (outlen < XML2ENC_HTML_ENTITY_FIXUPS_WORKBUF_LENGTH));
         workbuf[outlen] = 0; // add terminating zero byte
-        result_buf = result_buf ? apr_pstrcat(f->r->pool, result_buf, workbuf, NULL)
-                                : apr_pstrdup(f->r->pool, workbuf);
+
+        if (result_buf == NULL) {
+            result_buf = ap_malloc(outlen + 1);
+            strcpy(result_buf, workbuf);
+        }
+        else {
+            result_buf = ap_realloc(result_buf, result_size + outlen + 1);
+            strcat(result_buf, workbuf);
+        }
+
         result_size += outlen;
     }
+
+    if (result_buf) {
+        char *old_result_buf = result_buf;
+        result_buf = apr_pstrdup(f->r->pool, old_result_buf);
+        free(old_result_buf);
+    }
+
     *bufp = result_buf;
     *bytesp = result_size;
     return OK;


Re: xml2enc_html_entity_fixups() consuming all memory

2014-11-05 Thread Ewald Dieterich

On 11/05/2014 03:41 PM, Eric Covener wrote:

On Wed, Nov 5, 2014 at 9:28 AM, Ewald Dieterich ew...@mailbox.org wrote:

I'm running xml2enc in a reverse proxy setup (Apache httpd 2.4.4, but 2.4.10
shows the same behavior).


Are you running vanilla sources? I could not find this code, and 
searching for XML2ENC_HTML_ENTITY_FIXUPS_WORKBUF_LENGTH only finds a 
SUSE patch.


You are right, this is a patched Apache. I have to check where and why 
we pull in this change. Thanks for your help and sorry for the confusion.


mod_proxy: ProxyPass, Location and regex check

2014-03-03 Thread Ewald Dieterich

I try to get

1  <Location /?*[]/>
2  ProxyPass http://backend/?*[]/
3  ProxyPassReverse http://backend/?*[]/
4  </Location>

to work and get the error message

AH00526: Syntax error on line 2 of ...:
Regular expression could not be compiled.

I assume that /?*[]/ is a valid location, right? So it should work.


In mod_proxy.c, add_pass(), there is this snippet to detect a regex in 
the location (f is cmd->path, which is the location):

if (cmd->path) {
    [...]
    if (apr_fnmatch_test(f)) {
        use_regex = 1;
    }
}

And later:

if (use_regex) {
    new->regex = ap_pregcomp(cmd->pool, f, AP_REG_EXTENDED);
    if (new->regex == NULL)
        return "Regular expression could not be compiled.";
    [...]
}

I think the reason for this is to handle a regex in a location correctly 
(either <LocationMatch ...> or <Location ~ ...>).


Why is apr_fnmatch_test() used to recognize a regex? It only checks for 
*, ? and [] pairs (the comment on apr_fnmatch_test() is misleading: 
"Determine if the given pattern is a regular expression.").
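
For example (a minimal sketch; the call takes just the pattern string):

/* the literal location from the config above */
if (apr_fnmatch_test("/?*[]/")) {
    /* taken: '?', '*' and a "[]" pair are present, so the path is
     * treated as a pattern and ap_pregcomp() is attempted on it,
     * even though a literal URL path was meant */
    use_regex = 1;
}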


Is there a better way to check the location for a regex? I don't think 
so, because I can configure a valid non-regex location that looks 
exactly like a regex. Maybe extend struct cmd_parms_struct with a flag 
that marks path as a regex, roughly as sketched below? Then you 
wouldn't have to guess.
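
/* hypothetical flag, just to illustrate the idea: */
struct cmd_parms_struct {
    /* ... existing members ... */
    int path_is_regex;  /* set when the enclosing section is
                         * <LocationMatch> or <Location ~ ...> */
};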


Thoughts? Or maybe I'm misunderstanding something?


Re: Revisiting: xml2enc, mod_proxy_html and content compression

2014-02-13 Thread Ewald Dieterich

On 02/11/2014 06:03 PM, Nick Kew wrote:


On 6 Feb 2014, at 09:40, Ewald Dieterich wrote:

My wishlist:

* Make the configuration option as powerful as the compiled-in fallback 
so that you can configure e.g. "contains xml". But how would you do 
that? Support regular expressions?


Nice thought.  Perhaps the expression parser would be the ideal solution?


A good idea even if it could be a challenge to configure because the 
expression parser covers so much.



* Provide a configuration option to blacklist content types so that you 
can use the defaults that are compiled in but exclude specific types 
from processing (this is how I work around the SharePoint problem: I 
simply exclude content type "multipart/related").


Perhaps combined with the expression parser as a 'magic' clause that
expands to the default?


Yes, that would be very convenient.


Re: Revisiting: xml2enc, mod_proxy_html and content compression

2014-02-06 Thread Ewald Dieterich

Thanks for the patch!

On 02/05/2014 02:57 PM, Nick Kew wrote:


The hesitation is because I've been wanting to review the
patch before committing, and round tuits are in woefully
short supply.  So I'm attaching it here.  I'll take any feedback
from you or other users as a substitute for my own review,
and commit if it works for you without glitches.


Minor glitch: the patch doesn't compile because it uses the undeclared 
variable cfg in xml2enc_ffunc(). Otherwise it works as advertised.


My wishlist:

* Make the configuration option as powerful as the compiled-in fallback 
so that you can configure e.g. "contains xml". But how would you do 
that? Support regular expressions?


* Provide a configuration option to blacklist content types so that you 
can use the defaults that are compiled in but exclude specific types 
from processing (this is how I work around the SharePoint problem: I 
simply exclude content type "multipart/related").


Re: Revisiting: xml2enc, mod_proxy_html and content compression

2014-01-20 Thread Ewald Dieterich

On 12/17/2013 12:47 PM, Nick Kew wrote:


On 17 Dec 2013, at 10:32, Thomas Eckert wrote:


I've been over this with Nick before: mod_proxy_html uses mod_xml2enc to do the detection 
magic but mod_xml2enc fails to detect compressed content correctly. Hence a simple 
ProxyHTMLEnable fails when content compression is in place.


Aha!  Revisiting that, I see I still have an uncommitted patch to make
content types to process configurable.  I think that was an issue you
originally raised?  But compression is another issue.


I don't think you committed the patch to make content types 
configurable. Would you mind sharing that patch? I have problems with a 
SharePoint 2013 server that sends a response with a multipart/related 
content type, and I need to exclude that content type from processing:


Content-Type: multipart/related;
  type="application/xop+xml";
  boundary="urn:uuid:96f4525c-3b5b-4abf-ab09-7cfc8d346216";
  start="6cccf4ee-894d-49c4-909e-c6b13dc42...@tempuri.org";
  start-Info="text/xml; charset=utf-8"

Ewald


Re: Reverse proxy, mod_security, segmentation fault

2013-12-12 Thread Ewald Dieterich

On 12/12/2013 11:53 AM, Rainer Jung wrote:

On 12.12.2013 10:16, Ewald Dieterich wrote:

On a Debian unstable installation (Apache 2.4.6, apr 1.4.8, apr-util
1.5.3, mod_security 2.7.5) I enabled mpm_worker and configured a simple
reverse proxy. When I enable mod_security and then send large numbers of
POST requests to a misconfigured backend server that just drops the
requests, I get segmentation faults.


Could it be

https://issues.apache.org/bugzilla/show_bug.cgi?id=50335

See the patch discussion starting at comment #28.

The currently committed trunk patches are

http://svn.apache.org/viewvc?view=revision&revision=1534321

and

http://svn.apache.org/viewvc?view=revision&revision=1550061

Those fixes might not yet be a complete solution to the problem, but
might be easy to backport to 2.4 to check whether they fix your problem.


The patches fix my problem, no more segmentation faults.