Re: expose SSL_SHARED_CIPHERs from SSL/TLS

2023-03-06 Thread Dirk-Willem van Gulik



> On 6 Mar 2023, at 13:32, Ruediger Pluem  wrote:
> 
> 
> 
> On 3/6/23 12:35 PM, Dirk-Willem van Gulik wrote:
>> I was cleaning up some of our private code - and came across the patch below 
>> - exposing the SHARED_CIPHERs.
>> 
>> We scratch this itch in a few places to help force (or prevent) a protocol 
>> upgrade from application land.
>> 
>> No idea how common that is - any reason not to submit this as a suggestion 
>> for some future httpd version ?
> 
> If you provide some documentation for the var, go for it :-)

Draft against trunk below. As far as I could see mod_ssl.xml was the most 
sensible place to document this. 

Updated the SSL_CIPHER a little to clarify the relation between the two.

Dw

Index: docs/manual/mod/mod_ssl.xml
===
--- docs/manual/mod/mod_ssl.xml (revision 1908122)
+++ docs/manual/mod/mod_ssl.xml (working copy)
@@ -66,7 +66,8 @@
 SSL_SESSION_IDstring
The hex-encoded SSL session id
 SSL_SESSION_RESUMED   string
Initial or Resumed SSL Session.  Note: multiple requests may be served over 
the same (Initial or Resumed) SSL session if HTTP KeepAlive is in use
 SSL_SECURE_RENEG  string
true if secure renegotiation is supported, else 
false
-SSL_CIPHERstring
The cipher specification name
+SSL_SHARED_CIPHERSstring
Colon-separated list of shared ciphers (i.e. the ciphers that are 
supported by both the server and the client)
+SSL_CIPHERstring
The name of the selected cipher
 SSL_CIPHER_EXPORT string
true if cipher is an export cipher
 SSL_CIPHER_USEKEYSIZE number
Number of cipher bits (actually used)
 SSL_CIPHER_ALGKEYSIZE number
Number of cipher bits (possible)
Index: modules/ssl/ssl_engine_kernel.c
===
--- modules/ssl/ssl_engine_kernel.c (revision 1908122)
+++ modules/ssl/ssl_engine_kernel.c (working copy)
@@ -1532,6 +1532,7 @@
 "SSL_SERVER_A_SIG",
 "SSL_SESSION_ID",
 "SSL_SESSION_RESUMED",
+"SSL_SHARED_CIPHERS",
 #ifdef HAVE_SRP
 "SSL_SRP_USER",
 "SSL_SRP_USERINFO",
Index: modules/ssl/ssl_engine_vars.c
===
--- modules/ssl/ssl_engine_vars.c (revision 1908122)
+++ modules/ssl/ssl_engine_vars.c (working copy)
@@ -506,6 +506,11 @@
     else if (ssl != NULL && strcEQ(var, "COMPRESS_METHOD")) {
         result = ssl_var_lookup_ssl_compress_meth(ssl);
     }
+    else if (ssl != NULL && strcEQ(var, "SHARED_CIPHERS")) {
+        char buf[1024 * 16];
+        if (SSL_get_shared_ciphers(ssl, buf, sizeof(buf)))
+            result = apr_pstrdup(p, buf);
+    }
 #ifdef HAVE_TLSEXT
     else if (ssl != NULL && strcEQ(var, "TLS_SNI")) {
         result = apr_pstrdup(p, SSL_get_servername(ssl,

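For anyone wanting to try the new variable from config once merged, a minimal 
sketch (the log format string is illustrative only, not part of the patch):

    CustomLog "logs/tls_ciphers.log" \
        "%h %{SSL_PROTOCOL}x %{SSL_CIPHER}x shared=%{SSL_SHARED_CIPHERS}x"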



expose SSL_SHARED_CIPHERs from SSL/TLS

2023-03-06 Thread Dirk-Willem van Gulik
I was cleaning up some of our private code - and came across the patch below - 
exposing the SHARED_CIPHERs.

We scratch this itch in a few places to help force (or prevent) a protocol 
upgrade from application land.

No idea how common that is - any reason not to submit this as a suggestion for 
some future httpd version ?

Dw


Index: modules/ssl/ssl_engine_vars.c
===
--- modules/ssl/ssl_engine_vars.c   (revision 620141)
+++ modules/ssl/ssl_engine_vars.c   (working copy)
@@ -320,6 +320,11 @@
     else if (ssl != NULL && strcEQ(var, "COMPRESS_METHOD")) {
         result = ssl_var_lookup_ssl_compress_meth(ssl);
     }
+    else if (ssl != NULL && strcEQ(var, "SHARED_CIPHERS")) {
+        char buf[1024 * 16];
+        if (SSL_get_shared_ciphers(ssl, buf, sizeof(buf)))
+            result = apr_pstrdup(p, buf);
+    }
 #ifndef OPENSSL_NO_TLSEXT
 else if (ssl != NULL && strcEQ(var, "TLS_SNI")) {
 result = apr_pstrdup(p, SSL_get_servername(ssl,
Index: modules/ssl/ssl_engine_kernel.c
===
--- modules/ssl/ssl_engine_kernel.c (revision 620141)
+++ modules/ssl/ssl_engine_kernel.c (working copy)
@@ -1067,6 +1067,7 @@
 "SSL_SERVER_A_KEY",
 "SSL_SERVER_A_SIG",
 "SSL_SESSION_ID",
+"SSL_SHARED_CIPHERS",
 NULL
 };


and config
SSLSessionCache None
SSLSessionCacheTimeout  1
...
EOM




Extra bucket brigade with just an EOS on an input filter at the end.

2021-08-07 Thread Dirk-Willem van Gulik
In some code 
(https://source.redwax.eu/svn/redwax/rs/mod_cms_verify/trunk/mod_cms_verify.c) 
I have an input filter (that checks a PKCS#7 signature before passing the 
payload on to a proxy/cgi-script, etc).

I am testing this with:

echo "field1=foo=bar" |\
    openssl cms -sign -signer /tmp/sign.cert -outform DER -stream |\
    curl --data-binary @- http://127.0.0.1:8080/show.cgi

Works well.

But after all of this goes well, I am seeing an extra bucket brigade being 
passed, with 0 bytes. And I’d like to understand why.

Code is roughly as follows (see 
https://source.redwax.eu/svn/redwax/rs/mod_cms_verify/trunk/mod_cms_verify.c 
for the real McCoy):

  static apr_status_t _input_filter(ap_filter_t * f,  apr_bucket_brigade * 
bbout, ….
  {
    request_rec *r = f->r;
    verify_config_rec *conf = ap_get_module_config(r->per_dir_config, 
&cms_verify_module);

    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

    if (state == NULL)  {
        setup some state..
        state->pbb_tmp = apr_brigade_create(r->pool, c->bucket_alloc);
        ….
    }

    if (APR_BRIGADE_EMPTY(state->pbb_tmp)) {
        rv = ap_get_brigade(f->next, state->pbb_tmp, eMode, eBlock, nBytes);
        if (eMode == AP_MODE_EATCRLF || rv != APR_SUCCESS)
            return rv;
    }

    while (!APR_BRIGADE_EMPTY(state->pbb_tmp)) {
        apr_bucket *pbkt_in = APR_BRIGADE_FIRST(state->pbb_tmp);
        const char *data;
        apr_size_t len;

        if (APR_BUCKET_IS_EOS(pbkt_in)) {
            apr_bucket *pbkt_out = validate()..

            if (pbkt_out is valid)
                APR_BRIGADE_INSERT_TAIL(bbout, pbkt_out);

            APR_BRIGADE_INSERT_TAIL(bbout,
                apr_bucket_eos_create(r->connection->bucket_alloc));
            APR_BUCKET_REMOVE(pbkt_in);
            break;
        }

        rv = apr_bucket_read(pbkt_in, &data, &len, eBlock);
        if (rv != APR_SUCCESS)
            return rv;

        … add len bytes to a buffer

        apr_bucket_delete(pbkt_in);
    };
    return APR_SUCCESS;
  }

And mostly taken from mod_example.

What I am seeing is a first brigade with the POST content, with a terminating 
EOS. The bbout data makes it to the CGI script or (reverse) proxy.

But I am then getting a second _input_filter call with a second brigade of just 
an EOS packet.

What causes that ? Or am I not running through the brigade properly ?
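For reference, one way to make the filter robust against such trailing calls - 
a sketch only, assuming a seen_eos flag is added to the filter state - is to 
short-circuit once EOS has gone out:

    /* Hypothetical guard, assuming state->seen_eos is set when the EOS
     * bucket is forwarded above: later calls get an EOS straight back
     * instead of re-reading from f->next. */
    if (state->seen_eos) {
        APR_BRIGADE_INSERT_TAIL(bbout,
            apr_bucket_eos_create(f->r->connection->bucket_alloc));
        return APR_SUCCESS;
    }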

Dw



Re: child exit on self-proxy

2021-07-15 Thread Dirk-Willem van Gulik


On 14 Jul 2021, at 23:47, Yann Ylavic  wrote:
> On Wed, Jul 14, 2021 at 3:42 PM Yann Ylavic  wrote:
>> 
>> On Tue, Jul 13, 2021 at 4:16 PM Stefan Eissing
>>  wrote:
>>> 
>>> In Yann's v3 of the patch, it triggers only when I stop the server while 
>>> the test case is ongoing,
>> 
>> OK thanks, I have a new v6 which should address this and also still
>> call pre_close hooks in any case.
>> The patch is a bit "verbose" (some cosmetics/comments changes that
>> helped my workflow, sorry about that) but in the end I think it's
>> simpler to maintain (as Eric asked?) which may be worth it..
> 
> I have staged this a bit in https://github.com/apache/httpd/pull/208

Lovely - well done - this fixes things on our end.

Dw

Re: svn commit: r1875516 - /httpd/httpd/branches/2.4.x/STATUS

2020-03-22 Thread Dirk-Willem van Gulik
Thank you so much for finding this! 

This one has been plaguing us for quite a bit - and we were totally looking in 
the wrong place.

Dw.

> On 22 Mar 2020, at 13:12, gbec...@apache.org wrote:
> 
> Author: gbechis
> Date: Sun Mar 22 12:12:21 2020
> New Revision: 1875516
> 
> URL: http://svn.apache.org/viewvc?rev=1875516&view=rev
> Log:
> propose mod_ssl fix [skip ci]
> 
> Modified:
>httpd/httpd/branches/2.4.x/STATUS
> 
> Modified: httpd/httpd/branches/2.4.x/STATUS
> URL: 
> http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?rev=1875516&r1=1875515&r2=1875516&view=diff
> ==
> --- httpd/httpd/branches/2.4.x/STATUS (original)
> +++ httpd/httpd/branches/2.4.x/STATUS Sun Mar 22 12:12:21 2020
> @@ -156,6 +156,13 @@ PATCHES PROPOSED TO BACKPORT FROM TRUNK:
>  months/years without any action item :)
>  ylavic: will look at it ASAP..
> 
> +  *) mod_ssl: Fix memory leak of OCSP stapling response.
> +  The OCSP_RESPONSE is either ignored or serialized 
> (i2d_OCSP_RESPONSE) in the
> +  TLS response/handshake extension, so it must be freed.
> + trunk patch: http://svn.apache.org/r1874577
> + 2.4.x patch: svn merge -c 1874577 ^/httpd/httpd/trunk .
> + +1: gbechis
> +
> PATCHES/ISSUES THAT ARE BEING WORKED
>   [ New entries should be added at the START of the list ]
> 
> 
> 



Re: Netcraft

2020-02-21 Thread Dirk-Willem van Gulik
Jim,

On 21 Feb 2020, at 13:20, Jim Jagielski  wrote:

> Wow. Was Netcraft actually somewhat kind to Apache httpd? They actually 
> admitted some areas where httpd is doing better, and still does better, 
> market-share wise, than nginx.


They are very kind (and professional) people. And the pub in Bath is great (and 
it is one of the most lovely cities in England too) - I am sure they would not 
mind showing you around.

Dw.



Re: "Most Popular Web Server?"

2018-04-19 Thread Dirk-Willem van Gulik
On 19 Apr 2018, at 12:09, Graham Leggett  wrote:
> On 18 Apr 2018, at 10:46 PM, Mark Blackman  wrote:
> 
>> Is most popular the right thing to aim for? I would advise continuing to 
>> trade on Apache’s current strengths (versatility and documentation for me 
>> and relative stability) and let the chips fall where they may. It’s an open 
>> source project with a massive first-mover advantage and no investors to 
>> please. Just do the right thing, stay visible and the rest will sort itself 
>> out.
> 
> I agree strongly with this. I took a look at nginx and gave it a fair 
> evaluation, then I discovered this:
> ..
> "Anything else may possibly cause unpredictable behaviour, including 
> potential SIGSEGV.”
> 
> Both this document and the idea that SIGSEGV would remain unfixed would never 
> fly at Apache. Nginx suffers the problem in that product managers have to 
> trade off the pressure of new features for the marketing people over the need 
> to fix problems they already have. This isn’t sustainable for them.
..
> The strength of httpd is that it is a tank - it just keeps going and going. 
> You can deploy it and completely forget about it, it just works. This frees 
> up our users to focus their attention on doing whatever it is they want to do.

Large crude oil tankers and formula 1 racing cars are both things that can go 
from A to B. Yet they have different qualities. 

Perhaps we need to emphasise this a bit more - that there is room for different 
things in this market. 

I’ve found the same in production - nginx can be wonderfully fast in certain 
settings - but can also be a relatively fragile and finicky beast best run 
behind serious loadbalancing/failover.

Dw.

Re: BalancerMember and RFC1035 compliance - BalancerMember worker hostname (65-character.long.DNS.name.com) too long

2018-02-07 Thread Dirk-Willem van Gulik

> On 7 Feb 2018, at 16:24, Graham Leggett  wrote:
> 
> On 07 Feb 2018, at 5:18 PM, Graham Leggett  wrote:
> 
>> Looking back through the archives, looks like that backport was already 
>> accepted:
>> 
>> http://svn.apache.org/viewvc?view=revision&revision=1634520
> 
> Hmmm… it’s actually only solved the URL too long problem, the hostname too 
> long problem is still a fatal error:
> 
> ptr = apr_uri_unparse(p, , APR_URI_UNP_REVEALPASSWORD);
> if (PROXY_STRNCPY(wshared->name, ptr) != APR_SUCCESS) {
> ap_log_error(APLOG_MARK, APLOG_ERR, 0, ap_server_conf, APLOGNO(02808)
> "Alert! worker name (%s) too long; truncated to: %s", ptr, 
> wshared->name
> );
> }
> if (PROXY_STRNCPY(wshared->scheme, uri.scheme) != APR_SUCCESS) {
> return apr_psprintf(p, "worker scheme (%s) too long", uri.scheme);
> }
> if (PROXY_STRNCPY(wshared->hostname, uri.hostname) != APR_SUCCESS) {
> return apr_psprintf(p, "worker hostname (%s) too long", uri.hostname);
> }
> 
> Would this break if we did the same warning on hostname that’s done on the 
> full URL?

Not sure how this broke on your end - but the cases where I had it break on me 
in production were all cases where things were generated and dynamically 
registered with some sort of ``service-zone-status-etc-’’ thing:


casemo-apiservice-backend-frankfurt-production-W0091-2a01-4f80-0201--32e6---0002

so chopping off the end is a bit painful.

Dw.




Re: New ServerUID directive

2018-02-02 Thread Dirk-Willem van Gulik

> On 2 Feb 2018, at 15:44, Stefan Eissing  wrote:
> 
> 
> 
>> On 02.02.2018 at 15:42, Yann Ylavic wrote:
>> 
>> On Fri, Feb 2, 2018 at 3:25 PM, Plüm, Rüdiger, Vodafone Group
>>  wrote:>
>>> 
 -----Original Message----- From: Jim Jagielski
 [mailto:j...@jagunet.com] Sent: Friday, 2 February 2018 15:15
 To: httpd  Subject: Re: New ServerUID
 directive
 
 Why? If it is designed to not change between restarts then there
 are much easier ways to be unique, which it kinda already is,
 considering the actual structs being used.
>> 
>> I don't know what "easier ways" you are thinking about, the one
>> proposed here is not that complicated IMO.
>> In any case the method we are currently using in mod_proxy_lb *is*
>> changing accross restarts, mainly because of the line number.
>> What if you add/remove a line before the ?
>> At least for graceful restarts, I think it shall not change, SHMs
>> should be reused.
> 
> Is it a hash across the config record of a server that would give
> the desired behaviour?

We have several internal modules which sort of need this. In each case we 
generate a SHA-256 hash and a UUID based on the FQDN or IP address and the 
(first) port number it listens on. 

And we then log this with ‘info’ during startup; thus allowing an admin to at 
some later point add a ServerUUID to the list of ServerName, ServerAdmin, etc. 
as he or she sees fit.

In practice we found that the few times you need to change that UUID, you are 
in effect switching to a new origin server - or something like that at the 
protocol level - with respect to things like locks, caches and other short 
transactions/connections surviving RESTy things.

That said - having this in the core and being able to routinely seed this extra 
into things like cookies is goodness. We do that a lot now - and it helps with 
simplifying compliance a lot.

Dw.

Re: SSL and Usability and Safety

2017-05-03 Thread Dirk-Willem van Gulik

> On 3 May 2017, at 15:14, Issac Goldstand <mar...@beamartyr.net> wrote:
> 
> On 5/3/2017 3:59 PM, Dirk-Willem van Gulik wrote:
>> 
>>> On 3 May 2017, at 14:53, Issac Goldstand <mar...@beamartyr.net> wrote:
>>> 
>>> On 5/3/2017 12:46 PM, Dirk-Willem van Gulik wrote:
>>>> On 3 May 2017, at 10:03, Issac Goldstand <mar...@beamartyr.net> wrote:
>>>>> 
>>>>> +1 on the idea
>>>>> 
>>>>> So far I'm -0 about all of the proposed implementations for 2 reasons:
>>>>> 
>>>>> 1) Mr and Mrs normal (whom are our primary customers in the original
>>>>> proposal) usually download Apache from their distro or some other
>>>>> binary.  Their Apache sources are usually not up-to-date, and in the
>>>>> scenario that a new vulnerability is found it would take ages to
>>>>> propagate to them, anyway
>>>>> 
>>>>> 2) For those users who are comfortable building their own source, they
>>>> ….
>>>> 
>>>> So how about ‘us’ taking the lead here. 
>>> 
>>> That's the part I was always +1 on :)
>>> 
>>>> We, here, simply define ‘one’ setting as the industry best standard -
>>>> which roughly corresponds to what the ssllabs test would get you an
>>>> A+ and that pretty much meets or exceeds the various NIST et.al.
>>>> recommendations for key lengths for the next 5 years. 
>>> 
>>> The problem is, that we don't know what's going to be good going forward
>>> five years, and IMHO the best standards are shaped at least equally by
>>> removing "negatives" often because of high-risk vulnerabilities, as by
>>> adding new "positives" due to new available ciphers/tools
>> 
>> Right - so I think there are two things
>> 
>> 1) the general advice of NIST et.al. - summarized nicely at:
>> 
>> https://www.keylength.com
>> 
>> which tells us what our `best acceptable’ choices are in any release
>> going out and their likely validity for the next 5 years.
>> 
>> And then there is our response to things that become known; such as
>> vulnerability - for which we have a proven update proces that is
>> dovetailed by us sending mail to announce, the security mailing lists
>> and similar outreach.
> 
> Which, IMHO, we can safely expect Mr and Mrs Normal to never see.  Mr
> and Mrs normal aren't on the mailing list of most pieces of software,
> even if they use them.
> 
> If we truly want to cater to them (and by doing so, do our part in
> advocating a safer and more secure internet) then I maintain that we'd
> need to do better.

Right - but then we are in the land of automatic updates; binary package 
fetching and what not ? Or at the very least - pulling down a file over the 
internet from ‘us’ that is sufficiently protected yet robust, etc ?

That is a whole new land?

Dw.

Re: SSL and Usability and Safety

2017-05-03 Thread Dirk-Willem van Gulik

> On 3 May 2017, at 14:53, Issac Goldstand <mar...@beamartyr.net> wrote:
> 
> On 5/3/2017 12:46 PM, Dirk-Willem van Gulik wrote:
>> On 3 May 2017, at 10:03, Issac Goldstand <mar...@beamartyr.net> wrote:
>>> 
>>> +1 on the idea
>>> 
>>> So far I'm -0 about all of the proposed implementations for 2 reasons:
>>> 
>>> 1) Mr and Mrs normal (whom are our primary customers in the original
>>> proposal) usually download Apache from their distro or some other
>>> binary.  Their Apache sources are usually not up-to-date, and in the
>>> scenario that a new vulnerability is found it would take ages to
>>> propagate to them, anyway
>>> 
>>> 2) For those users who are comfortable building their own source, they
>> ….
>> 
>> So how about ‘us’ taking the lead here. 
> 
> That's the part I was always +1 on :)
> 
>> We, here, simply define ‘one’ setting as the industry best standard - which 
>> roughly corresponds to what the ssllabs test would get you an A+ and that 
>> pretty much meets or exceeds the various NIST et.al. recommendations for key 
>> lengths for the next 5 years. 
> 
> The problem is, that we don't know what's going to be good going forward
> five years, and IMHO the best standards are shaped at least equally by
> removing "negatives" often because of high-risk vulnerabilities, as by
> adding new "positives" due to new available ciphers/tools

Right - so I think there are two things

1)  the general advice of NIST et.al. - summarized nicely at:

https://www.keylength.com

which tells us what our `best acceptable’ choices are in any release going out 
and their likely validity for the next 5 years.

And then there is our response to things that become known; such as 
vulnerability - for which we have a proven update proces that is dovetailed by 
us sending mail to announce, the security mailing lists and similar outreach.

>> We’d wrap this into a simple policy document. Promise ourselves that we’d 
>> check this every release and at least once a year review it. And have a 
>> small list of the versions currently meeting or exceeding our policy.
>> 
>> And this is the setting you get when you do ‘SSLEngine On’.
> 
> To me this doesn't address the time lags between shipping it in source,
> and it getting implemented on the machine.  I don't see it as superior
> in any way to, say, putting the recommended settings in the online docs
> and updating that from time to time - moreso when that update is in
> response to a new vulnerability.

I think we should make sure that 1) any release that goes out meets or exceeds 
industry practice (e.g. the BSI 2017 recommendation up to 2022); and 2) that we 
keep a list of versions that are still `current’ enough, and that accommodates 
what we have learned since their release. Typically meaning - update to version X.Y.

Dw

Re: SSL Policy Definitions

2017-05-03 Thread Dirk-Willem van Gulik

> On 3 May 2017, at 14:09, Graham Leggett  wrote:
> 
> On 03 May 2017, at 2:01 PM, Stefan Eissing  
> wrote:
> 
>> We seem to all agree that a definition in code alone will not be good 
>> enough. People need to be able to see what is actually in effect.
> 
> I think we’re overthinking this.
> 
> We only need to document the settings that SSLSecurityLevel has clearly in 
> our docs, and make sure that "httpd -L” prints out the exact details so no 
> user need ever get confused.
> 
>> If we let users define their own classes, it could look like this:
> 
> Immediately we’ve jumped into functionality that is beyond Mr/Mrs Normal.

Agreed. If our default is simply ‘industry best practice’ (i.e. what we say it 
is*) — then Normal will be the new black.

And everyone else is still in the same boat - i.e. having to specify it just 
like they do today.

All that requires it to make the defaults sane.

Dw.

*: exceed NIST and https://www.keylength.com/ for 
5+ years, PFS, A or better at SSLLabs. 
https://github.com/ssllabs/research/wiki/SSL-and-TLS-Deployment-Best-Practices






Re: SSL and Usability and Safety

2017-05-03 Thread Dirk-Willem van Gulik
On 3 May 2017, at 10:03, Issac Goldstand  wrote:
> 
> +1 on the idea
> 
> So far I'm -0 about all of the proposed implementations for 2 reasons:
> 
> 1) Mr and Mrs normal (whom are our primary customers in the original
> proposal) usually download Apache from their distro or some other
> binary.  Their Apache sources are usually not up-to-date, and in the
> scenario that a new vulnerability is found it would take ages to
> propagate to them, anyway
> 
> 2) For those users who are comfortable building their own source, they
 ….

So how about ‘us’ taking the lead here. 

We, here, simply define ‘one’ setting as the industry best standard - which 
roughly corresponds to what the ssllabs test would get you an A+ and that 
pretty much meets or exceeds the various NIST et.al. recommendations for key 
lengths for the next 5 years. 

We’d wrap this into a simple policy document. Promise ourselves that we’d 
check this every release and at least once a year review it. And have a small 
list of the versions currently meeting or exceeding our policy.

And this is the setting you get when you do ‘SSLEngine On’.

Everything else stays as is.

Dw



Re: SHA-256

2017-02-24 Thread Dirk-Willem van Gulik
On 24 Feb 2017, at 18:52, Jim Jagielski  wrote:
> 
> I think we should start, in addition to "signing" w/ md5 and sha-1,
> using sha-256 as well.
> 
> Sound OK?

That seems to match the advice of NIST, E-CRYPT and the BSI on


https://www.nrc.nl/nieuws/2017/02/24/zelfrijdende-auto-google-uber-stal-onze-robotauto-6964363-a1547533
 


none of which seems that eager to push us to 384 or 512 for the next 4 years. 

Though if we are updating the scripts - perhaps add sha-512 - just to 
‘socialize’ it early.

Dw.



Re: rfc7231 - content-md5

2017-01-20 Thread Dirk-Willem van Gulik

> On 20 Jan 2017, at 21:13, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> On Fri, Jan 20, 2017 at 1:49 PM, William A Rowe Jr <wr...@rowe-clan.net> 
> wrote:
>> On Fri, Jan 20, 2017 at 1:21 PM, Dirk-Willem van Gulik
>> <di...@webweaving.org> wrote:
>>> RFC 7231 has retired Content-MD5.
>>> 
>>> Fair game to remove it from -trunk - or make it squeak 'deprecated' at WARN 
>>> or INFO and retire it at the next minor release ?
> 
> +1
> 
>> Removing what, precisely? Content-MD5 headers aren't implemented in trunk.
> 
> Sorry, I missed a -i arg to grep :)
> 
Ah ok - thanks - I was wondering if I had been in Amsterdam for too long (which 
I had - and it involved shops where they do sell coffee).

> Yes, the default_handler behavior in trunk/server/core.c can simply be 
> removed,
> along with the core.c ContentDigest directive handling. I think it
> should also be
> removed from modules/cache/mod_cache.c as it is not a valid entity header.
> 
> The many unset Content-MD5 actions must remain IMO to guard against our
> sharing this upstream or downstream.
> 
> The http_core.h core flag content_md5 and values AP_CONTENT_MD5_*
> should probably remain until 3.0 to avoid unnecessary API changes. a doxygen
> /** @deprecated Unused flag */ against that struct member will help us mop 
> these
> up during any 3.0 review.

Dw.



Re: rfc7231 - content-md5

2017-01-20 Thread Dirk-Willem van Gulik
On 20 Jan 2017, at 20:49, William A Rowe Jr  wrote:
> 
> Note that https://www.rfc-editor.org/rfc/rfc7616.txt still provides
> for MD5 hashed
> digest auth keys. So removing this altogether will break mod_auth_digest in a
> manner that breaks existing user auth.
> 


Right - and these need to stay. These are for interoperability reasons - and 
only affect that.

I think I am getting somewhere - currently going through a handful of packages 
that use APR and splitting things into:

apr_digest_64()
apr_digest_128()
apr_digest_256()
apr_digest_512()

for places where there is no cryptographic need, and

apr_crypto_digest  ---

with the actual name of a cryptographic algorithm like SHA256, etc. Either 
because
it has a cryptographic need -or- because of interoperability -or- both.

And that seems to yield fairly clean results - which ultimately are conducive 
to 'fips' style flags that 'force' ancient algorithms, like MD5, out of 
critical places; while letting harmless uses continue.
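For illustration, the split might look roughly like this - a declaration sketch 
only; all signatures here are guesses, not the actual patch:

    /* Size-based digests: no cryptographic guarantee implied. */
    apr_status_t apr_digest_128(unsigned char out[16],
                                const void *data, apr_size_t len);

    /* Named algorithm, for cryptographic or interoperability needs. */
    apr_status_t apr_crypto_digest(const char *algo,   /* e.g. "SHA256" */
                                   unsigned char *out, apr_size_t *outlen,
                                   const void *data, apr_size_t len);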

Dw



Re: rfc7231 - content-md5

2017-01-20 Thread Dirk-Willem van Gulik

> On 20 Jan 2017, at 20:49, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> On Fri, Jan 20, 2017 at 1:21 PM, Dirk-Willem van Gulik
> <di...@webweaving.org> wrote:
>> RFC 7231 has retired Content-MD5.
>> 
>> Fair game to remove it from -trunk - or make it squeak 'deprecated' at WARN 
>> or INFO and retire it at the next minor release ?
> 
> Removing what, precisely? Content-MD5 headers aren't implemented in trunk.

That is odd. I have in 

Repository Root: http://svn.apache.org/repos/asf
Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68
Revision: 1779019

Quite a few:

> ./modules/cache/mod_cache.c:"Content-MD5",
> ./modules/filters/mod_brotli.c:apr_table_unset(r->headers_out, 
> "Content-MD5");
> ./modules/filters/mod_deflate.c:apr_table_unset(r->headers_out, 
> "Content-MD5");
> ./modules/filters/mod_deflate.c:
> apr_table_unset(r->headers_in, "Content-MD5");
> ./modules/filters/mod_deflate.c:apr_table_unset(r->headers_out, 
> "Content-MD5");
> ./modules/filters/mod_filter.c:
> apr_table_unset(r->headers_out, "Content-MD5");
> ./modules/lua/mod_lua.c:apr_table_unset(r->headers_out, 
> "Content-MD5");
> ./server/core.c:apr_table_setn(r->headers_out, "Content-MD5",
> ./server/protocol.c:apr_table_unset(rnew->headers_in, "Content-MD5");

Did I fuck up my repo in an epic way?

Dw

rfc7231 - content-md5

2017-01-20 Thread Dirk-Willem van Gulik
RFC 7231 has retired Content-MD5.

Fair game to remove it from -trunk - or make it squeak 'deprecated' at WARN or 
INFO and retire it at the next minor release ?

Dw.

Re: mod_lets-encrypt

2017-01-15 Thread Dirk-Willem van Gulik

> On 14 Jan 2017, at 22:31, William A Rowe Jr <wr...@rowe-clan.net> wrote:
> 
> On Sat, Jan 14, 2017 at 12:15 PM, Dirk-Willem van Gulik
> <di...@webweaving.org> wrote:
>> 
>> On 14 Jan 2017, at 19:05, William A Rowe Jr <wr...@rowe-clan.net> wrote:
>> 
>> Any mod_letsencrypt can provision the certs but needs to do so
>> while still root, before servicing requests (although there could be
>> some bounce-step where the MPM begins satisfying requests,
>> including the verification request necessary for letsencrypt.) We
>> certainly don't want to parse any web response whatsoever while
>> running as root.
>> 
>> Some of this will be needed - we need to be root to bind to port 80 — as the
>> protocol (in my reading) seems to demand it (now would be a good time to
>> petition the draft to change this for a random higher port).
>> 
>> In fact - that may be a nice feature - an, essentially, ephemeral port.
> 
> My thinking was more along the lines that we fork off a process, much like
> mod_cgid or mod_ftp does to handle low numbered ports. What is actually
> handed across the domain socket or equivalent is structured simply enough
> to not be harmed by an errant child process, or if needed by the server
> earlier, then forked with the lesser run-as-httpd user permissions to speak
> some predictable and strictly constructed message back up to the root parent
> process.
> 
> It might be nice to our users if the cert/key isn't in the keystore,
> that the server
> starts anyways with a dummy cert, until it finally resolves to the
> letsencrypt CA
> server and completes that handshake.

Technically there is no need for this from a lets-encrypt perspective. Just 
port 80 for a few seconds.

> The server can respond to that child
> process modifying the keystore and exiting by initiating a graceful restart to
> begin serving traffic with the newly obtained cert/key material.

So the process of registering afresh; sending the tokens and getting the certs 
takes seconds at the very most.

So I’d be quite willing to allow a server to not start up and stay in a very 
simple mode prior to this.

An option may be to suid/chroot and whatnot down - and just hand the child a 
file descriptor to which it has write access for just the key - after which it 
dies.

But it is fair to assume we can hold off starting the server; it is pretty 
common to see 2-3 seconds of cycle time due to DNS and whatnot already.

And anyone who understands/cares can do a dead normal setup using SSL 
thing-a-me’s.

or am I getting the scope wrong ?

> Somewhere in this equation lies accessor functions to mod_ssl to allow us
> access to keystores. Which would be a very useful facility, well beyond just
> letsencrypt. Whether those cert/key pairs live in ldap or some other place.
> Imagine a small cluster, say 4 instances of httpd, where only one performs
> the letsencrypt dance and the others search a memcache or other remote
> resource for the resulting cert/key materials (locked down well, we would
> hope.)

Agreed - but I think that this is an orthogonal issue - and have so far found 
that with things like chipcards and HSMs the openssl engines are doing splendidly.

Dw.

Re: mod_lets-encrypt

2017-01-15 Thread Dirk-Willem van Gulik

> On 14 Jan 2017, at 20:05, Stefan Sperling <s...@stsp.name> wrote:
> 
> On Sat, Jan 14, 2017 at 07:15:29PM +0100, Dirk-Willem van Gulik wrote:
>> In fact - that may be a nice feature - an, essential, empheral port.
> 
> Would that work for web servers behind firewalls?

No; would not expect so. Am willing to declare such more or less out of scope 
for a simple apache module — i.e. if you are in an environment that is that 
‘pro’ that it can do decent firewalls and what not - then running a script like 
dehydrated on cron every 2 weeks to keep your lets-encrypt certs up to date 
should not exactly be beyond you.

But I am willing to be convinced that I got the ‘sweet’ spot for this wrong. I 
am amazed by the level of handholding/detail people are willing to 
forego/eschew/need in the world of docker containers.

Dw.



Re: mod_lets-encrypt

2017-01-14 Thread Dirk-Willem van Gulik

> On 14 Jan 2017, at 19:05, William A Rowe Jr  wrote:
> 
> On Sat, Jan 14, 2017 at 10:22 AM, Eric Covener  wrote:
>> On Sat, Jan 14, 2017 at 11:19 AM, Eric Covener  wrote:
>>> 
>>> I think if a feature/directive will turn on something that will write
>>> to configured keystores, it really shouldn't do or dictate much else.
>> 
>> Poorly phrased, but I think obtaining a cert should be separate from
>> things like further SSL configuration.
> 
> I think Dirk is suggesting that the core mod_ssl continues to exist, with
> sane defaults that require next to no specific directives other than to
> perhaps set the https protocol on port 443, and (I vote optionally) have
> a one line toggle for rewriting all port 80 requests to 443.
> 
> Note that h2 requests will continue to be honored on either port 80
> or 443, so this has to be crafted somewhat carefully.
> 
> I'm 100% in support of ensuring that mod_ssl runs with the most
> sensible choices in the most minimal config.
> 
> Any mod_letsencrypt can provision the certs but needs to do so
> while still root, before servicing requests (although there could be
> some bounce-step where the MPM begins satisfying requests,
> including the verification request necessary for letsencrypt.) We
> certainly don't want to parse any web response whatsoever while
> running as root.

Some of this will be needed - we need to be root to bind to port 80 — as the 
protocol (in my reading) seems to demand it (now would be a good time to 
petition the draft to change this for a random higher port).

In fact - that may be a nice feature - an, essentially, ephemeral port.

And we will need to be able to respond to an HTTP request to a well-known URL 
with the public key/token — and after that have some fork/pid be root enough to 
write a few things to safe places.

> I do believe the proposal should require a one line directive to
> enable this, particularly for the compiled-in static many modules
> build of httpd. It's shouldn't be simply a matter of loading some
> mod_letsencrypt without also some 'LetsEncrypt on" directive
> in the ssl vhost config.

The alternative is bundling a small shell script, like a stripped-down 
‘dehydrated’:

https://github.com/lukas2511/dehydrated/blob/master/dehydrated 


as a tool. And augment it with examples. But then you are back to square one.

Dw.

Re: mod_lets-encrypt

2017-01-14 Thread Dirk-Willem van Gulik
(reshuffled top post)

On 14 Jan 2017, at 16:07, Rich Bowen <rbo...@rcbowen.com> wrote:
> On Jan 10, 2017 12:15 PM, "Jacob Champion" <champio...@gmail.com> wrote:
> On 01/10/2017 08:35 AM, Dirk-Willem van Gulik wrote:
> Before I send someone into the woods - did anyone consider/do a quick
> ‘mod_lets_encrypt’ (with or without a persistent store) — that
> requires virtually no configuration ?
> 
> Considered? Yes. Back in August there was some discussion on this list with 
> Josh Aas. I don't know what the current status is.
..
> https://lists.apache.org/thread.html/ea902ae8e453b3a8d36345318fc74a54880d8bf14fed24e665c4b833@%3Cdev.httpd.apache.org%3E
...
> https://lists.apache.org/thread.html/27d1fce7d30d9e31e2472045c260e4f8dcefd300a731ff9e435a5d4a@%3Cdev.httpd.apache.org%3E
> I talked with him at linuxcon, but there's been no followup. I for one would 
> love to see this happen.

So it seems we currently pull in enough sundry into mod_ssl to barely need 
something as ‘big’ as:

https://github.com/kristapsdz/acme-client 

if we assume the HTTP method (only) — which makes sense if we assume that this 
module is first and foremost about zero configuration and convenience. 

By extension that also means we ‘trust’ ourselves and the user we start up as 
(prior to any fork, suid or chrooting).

So I think we can narrow this down a lot (I just had to put something like this 
into a simple XMPP system) - as anyone who wants to do something more complex 
can use the many scripts available or use ‘real’ CA request processes.

So we are talking about the configs which are as simple as

# Defaults to /var/httpd/acme-private or something like we do for 
Mutexes.
#
ACMEDir /var/db/httpd/acme-private


ACME automatic
…

Where this implies SSLEnable and a sane/best-practice (‘A+’) set of 
baseline SSL directives w.r.t. OCSP stapling and so on. And absolutely no 
further SSL statements in your vhost. And it implies that port 80 forwards to 
https. Perhaps disallow any port/listen stuff ?

If you need SSL statements — then you are out of scope ? Is that fair ?

Which reduces the footprint to:

-   fetching the ACME-CA-URLs on first run ever; on install —or— having 
these as part of the distribution.

=> Fair to assume we ship with the ACME CA url ?

-   pin the server.

-   reading -or- generating an RSA key & storing it 0600 in what we 
consider a safe enough place (prior to any chrooting). 

=> Fair to assume key rollover is out of scope for this key (we could 
even do this ephemeral - and regenerate on renewal) ?

=> Fair to assume we just use one for all domains ?

-   Register this on first run. 

=>  Fair to assume things like contact email and EULA ok are out of 
scope ?

-   Build up a cert for each vhost; with all aliases added too as altnames. 

- for each vhost (altname) — get a challenge from the ACME server.

=>  Fair to assume we do not care about account recovery ?

-   bring up that/all vhost on their port 80  (all vhosts that have 
LetsEncrypt set to auto/on).

=> so if I understand the spec right - only port 80

==>  thus fair to assume non-root startup is out of 
scope - as we need to bind to a port < 1024.

So therefore:
=> fair to configure this ‘easier’ — i.e. if you do 
‘LetsEncrypt on’ on a domain, SSL auto switches on and port 80 becomes an automatic 
redirect to https ?

-   Respond to the well-known URI with each of the challenges.

-   Fetch/poll for the cert; save with 0600 and tight UID and be done.

-   Gracefully fail for each domain after N seconds - blocking https and 
with a sorry no ACME on http.

=>  do we need to allow for an ErrorDocument customisation of the 
latter ?

-   Set up a sane virtual SSL set of directives; and then chroot/suid/fork 
and start your normal run.

-   Switch off http or make http a blind forwarder to https.

-   Can we assume that the server has a graceful restart every week ? 

=> And thus declare ACME renewal from within out of scope  ?

Or do we need to auto disable SSL and go to http ?

Does that make sense - or have I oversimplified it and lost the plot ?

Dw







mod_lets-encrypt

2017-01-10 Thread Dirk-Willem van Gulik
Before I send someone into the woods - did anyone consider/do a quick 
‘mod_lets_encrypt’ (with or without a persistent store) — that requires 
virtually no configuration ?

Or is the web world still thinking unix with clear small concise scripts that 
do one thing well ?

Dw

Re: EC to audit Apache HTTP Server

2016-07-24 Thread Dirk-Willem van Gulik
On 22 Jul 2016, at 18:59, Steffen  wrote:
> 
> See https://joinup.ec.europa.eu/node/153614

I think I’ve gotten the right people at the EC to know that Eric Covener 
(cove...@apache.org) as the Chair of HTTPD is the right starting place should 
they want a proper conversation.

Dw (who once upon a time worked at that EC department).

Re: svn commit: r1750779 - in /httpd/httpd/trunk: CHANGES modules/ssl/ssl_engine_kernel.c

2016-06-30 Thread Dirk-Willem van Gulik
That is my reading as well - +1.

> On 30 Jun 2016, at 18:38, Jim Jagielski  wrote:
> 
> +1
> 
>> On Jun 30, 2016, at 11:38 AM, Stefan Eissing  
>> wrote:
>> 
>> We now set exactly the same callback right before in line 709. If we had 
>> more than one callback, we would not have to specify NULL, but restore any 
>> previous callback there was, right?
>> 
>> But that is all just theoretical, as You and Yann already stated.
>> 
>>> Am 30.06.2016 um 17:26 schrieb Yann Ylavic :
>>> 
>>> On Thu, Jun 30, 2016 at 5:05 PM, Ruediger Pluem  wrote:
 
 Is there a reason why we use ssl_callback_SSLVerify instead of NULL like we 
 do in a similar situation below?
 IMHO we do not want to change the callback here to whatever it may set.
 I agree that in practice there won't be any difference right now, since we 
 only have one callback.
>>> 
>>> I agree that if/when we have multiple callback possibilities, we
>>> should set NULL here, but also above where we force the new mode.
>>> 
>>> Nothing to worry about for now, though.
>>> 
>>> Regards,
>>> Yann.
>> 
> 
> 



Odd SSLProxyCheckPeerCN behaviour

2016-02-17 Thread Dirk-Willem van Gulik
Was looking into a report that something could get a lift on websocket with a 
specific AltSubject trickery; but got into yak shaving - where I cannot work 
out why SSLProxyCheckPeerCN et.al. get ignored. The most trivial config I could 
find to reproduce is:

Listen  123.123.123.123:4321

<VirtualHost 123.123.123.123:4321>
ServerName test-websock-bypass.webweaving.org   
LogLevel Debug

SSLProxyEngine On

SSLProxyCheckPeerCN off # Not using SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
SSLProxyVerify off


SSLProxyCACertificateFile …./proxy.pem

ProxyPass / https://127.0.0.1:1234/
ProxyPassReverse / https://127.0.0.1:1234/
</VirtualHost>

This is getting tested with

beeb:~ dirkx$ curl - https://123.123.123.123:4321/

giving us a 

500 Proxy Error

The proxy server could not handle the request GET /.
Reason: Error during SSL Handshake with remote 
server 

However the log gives me:

[Wed Feb 17 12:18:53.167903 2016] [ssl:debug] [pid 48233] 
ssl_engine_kernel.c(1560): [remote 192.168.0.5:6045] AH02275: Certificate 
Verification, depth 0, CRL checking mode: none [subject: 
emailAddress=root@host.unknown,CN=host.unknown,OU=root / issuer: 
emailAddress=root@host.unknown,CN=host.unknown,OU=root / serial: 2481 / 
notbefore: Feb 10 10:51:12 2016 GMT / notafter: Feb  7 10:51:12 2026 GMT]
[Wed Feb 17 12:18:53.169458 2016] [ssl:debug] [pid 48233] 
ssl_engine_kernel.c(2018): [remote 192.168.0.5:6045] AH02041: Protocol: 
TLSv1.2, Cipher: ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
[Wed Feb 17 12:18:53.169508 2016] [ssl:debug] [pid 48233] ssl_util_ssl.c(443): 
AH02412: [weser.webweaving.org:443] Cert does not match for name '192.168.0.5' 
[subject: emailAddress=root@host.unknown,CN=host.unknown,OU=root / issuer: 
emailAddress=root@host.unknown,CN=host.unknown,OU=root / serial: 2481 / 
notbefore: Feb 10 10:51:12 2016 GMT / notafter: Feb  7 10:51:12 2026 GMT]
[Wed Feb 17 12:18:53.169524 2016] [ssl:info] [pid 48233] [remote 
192.168.0.5:6045] AH02411: SSL Proxy: Peer certificate does not match for 
hostname 192.168.0.5
[Wed Feb 17 12:18:53.169541 2016] [ssl:info] [pid 48233] [remote 
192.168.0.5:6045] AH01998: Connection closed to child 0 with abortive shutdown 
(server weser.webweavin

Now AFAIKS - AH02412 is fair game - that is in modssl_X509_match_name() and the 
name does indeed not match.

But I fail to understand the error on AH02411 — it is in ssl_engine_io.c

  if ((sc->proxy_ssl_check_peer_name != SSL_ENABLED_FALSE) &&
hostname_note) {
apr_table_unset(c->notes, "proxy-request-hostname");
if (!cert
|| modssl_X509_match_name(c->pool, cert, hostname_note,
  TRUE, server) == FALSE) {
proxy_ssl_check_peer_ok = FALSE;
ap_log_cerror(APLOG_MARK, APLOG_INFO, 0, c, APLOGNO(02411)
  "SSL Proxy: Peer certificate does not match "
  "for hostname %s", hostname_note);
}
}
else if ((sc->proxy_ssl_check_peer_cn != SSL_ENABLED_FALSE) &&
hostname_note) {
const char *hostname;
int match = 0;

So I am now wondering about this logic in case of no alternative subject. And 
if superseding it was good enough - or if it should be totally removed. Or if 
this check needs to become an either/or check (if there is no subject 
alternative).
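For what it is worth, the quoted code consults proxy_ssl_check_peer_name before 
it ever reaches the proxy_ssl_check_peer_cn branch - so a quick experiment (an 
assumption to verify, not a confirmed fix) would be to disable that first 
branch as well:

        SSLProxyCheckPeerName   off
        SSLProxyCheckPeerCN     off
        SSLProxyCheckPeerExpire off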

Dw

Re: Buffer size in mod_session_crypto.c, decrypt_string()

2015-11-19 Thread Dirk-Willem van Gulik

> On 19 Nov 2015, at 10:07, Ewald Dieterich  wrote:
> 
> This is from mod_session_crypto.c, decrypt_string():
> 
>/* strip base64 from the string */
>decoded = apr_palloc(r->pool, apr_base64_decode_len(in));
>decodedlen = apr_base64_decode(decoded, in);
>decoded[decodedlen] = '\0';
> 
> Shouldn't that be ("+ 1" for the added '\0'):
> 
>   decoded = apr_palloc(r->pool, apr_base64_decode_len(in) + 1);
> 
> At least that's how it's done in eg. mod_auth_basic.c. Or can we make any 
> assumptions about the number of characters that apr_base64_decode_len() 
> returns?
> 

Hmm - very strong feeling of deja vu: 
https://bz.apache.org/bugzilla/show_bug.cgi?id=43448 from some 10 years ago 
(it blew something out of the water back then).

And pretty sure we fixed the documentation. _len() returns the max size of the 
buffer (i.e. + 1). 

And _decode() returns the exact length of the string without the trailing \0 
(which it silently does add) - and which may be shorter than _len().
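
In code, a minimal sketch of those documented semantics (assuming "in" is a 
NUL-terminated base64 string):

    int maxlen = apr_base64_decode_len(in);          /* incl. room for '\0' */
    char *decoded = apr_palloc(r->pool, maxlen);
    int decodedlen = apr_base64_decode(decoded, in); /* length excl. '\0' */
    /* decoded[decodedlen] is already '\0'; the explicit store is redundant
     * (it would be required with apr_base64_decode_binary(), which has no
     * corresponding _len() helper). */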

So above should be fine — but the 

>decoded[decodedlen] = '\0';

seems both confusing and superfluous. Alternatively one could use 
_decode_binary(), in which case the above terminator is absolutely needed. There 
is no _binary_len().

Dw






Re: Linking sqlite in to apache module

2015-07-13 Thread Dirk-Willem van Gulik
On 13 Jul 2015, at 10:21, Prakash Premkumar prakash.p...@gmail.com wrote:
 I'm trying to call sqlite function from my apache module.
 
 The source code for the module can be found here:
 http://pastebin.com/zkbTf03J
..
 When I navigate to localhost/ : I get It Works! message
 
 But when I naviage to any other URL like localhost/asd
 
 I get  the following response
 
 No data received
 ERR_EMPTY_RESPONSE

You may want to set a 

 ap_set_content_type(r, "text/html");

before ap_rputs()ing data. Secondly I would sprinkle a few lines like:

ap_log_perror(APLOG_MARK, APLOG_NOTICE, 0, 
r->pool, "Past %s:%d", __FILE__, __LINE__);

in your code - to see where it gets in the hook handler. Tail the error log to 
see what is going on.
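
Put together, the minimal shape of such a handler could look like this (a 
sketch with hypothetical names, not the poster's module):

    static int example_handler(request_rec *r)
    {
        /* only act on requests explicitly mapped to this handler */
        if (strcmp(r->handler, "sqlite-example") != 0)
            return DECLINED;

        ap_log_rerror(APLOG_MARK, APLOG_NOTICE, 0, r,
                      "Past %s:%d", __FILE__, __LINE__);

        ap_set_content_type(r, "text/html");
        ap_rputs("<html><body>It works!</body></html>", r);
        return OK;
    }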

Dw.

Good at assembler ? (Was:httpd - side channel attack - timing of digest comparisons)

2015-05-29 Thread Dirk-Willem van Gulik
 On 28 May 2015, at 17:03, William A Rowe Jr wr...@rowe-clan.net wrote:
….
   On 26 May 2015, at 17:22, Dirk-Willem van Gulik di...@webweaving.org wrote:
  ..
   So I think that what is needed are two (or three) functions
  ...
   - A string comparison function; where at least one string is 
   under control of the attacker.
 
Thanks for all the feedback. With a bit of help from Martijn Boekhorst and Andy 
Armstrong I got as far as:

https://gist.github.com/dirkx/37c29dc5a82b6deb0bf0

which I am now testing against a wide range of settings and compilers (in essence 
running a side channel attack on it and seeing how far I get with the various 
variations).

However I could use some help and manual inspection ?

So if you have the time and can read assembler well - can you compile this at a 
reasonable optimizer setting and look at the assembler to confirm that key 
elements are not somehow optimized away; i.e. the inner loop is running in 
constant time. I am fairly sure about what happens on ARM and i386 — but modern 
x86 is largely voodoo to me.

Secondly - when we get to the end of the shorter string; we can either keep 
comparing to the last char or \0; or we go ‘modulo’ to the start of the string. 
Now modulo is perhaps not ideal; and seems to affect the pipeline on the XEON 
cpu (something I confess not to quite understand; and I cannot see/replicate on 
ARM).

So I would also love to have feedback on the two strategies in the code w.r.t. 
to that - and not getting hit by pipeline length; L3 caches or odd things like 
page boundaries.

Above GIST has the full code - the simplified version below (which does not 
have the 1024 min length).

Dw.



// MODULO version

AP_DECLARE(int) ap_timingsafe_strcmp(const char * hostile, const char * 
toProtect) {
    const unsigned char *p1 = (const unsigned char *)hostile;
    const unsigned char *p2 = (const unsigned char *)toProtect;

    size_t i = 0, i1 = 1, i2 = 1;
    unsigned int d1 = 1, d2 = 1, res = 0;
    do {
        res |= (p1[i % i1] - p2[i % i2]);

        d1 = !!p1[i % i1];
        d2 = !!p2[i % i2];

        i1 += d1;
        i2 += d2;
        i++;
    } while (d1); // we reveal the length of the hostile string.

    res |= (i1 - i2);

    return (int) res;
}
 

// Cycle at last char version

AP_DECLARE(int) ap_timingsafe_strcmp(const char * hostile, const char * 
toProtect) {
    const unsigned char *p1 = (const unsigned char *)hostile;
    const unsigned char *p2 = (const unsigned char *)toProtect;

    size_t i = 0, i1 = 0, i2 = 0;
    unsigned int d1 = 1, d2 = 1, res = 0;

    do {
        res |= (p1[i1] - p2[i2]);

        d1 = !!p1[i1];
        d2 = !!p2[i2];

        i1 += d1;
        i2 += d2;
        i++;
    } while (d1); // we reveal the length of the hostile string. Use 
                  // (d1|d2) and a min run-length to hide both.

    res |= (i1 - i2);

    return (int) res;
}





Re: mod_h2 internals

2015-05-28 Thread Dirk-Willem van Gulik

 On 28 May 2015, at 16:25, Jim Jagielski j...@jagunet.com wrote:
 
 One thing I've been thinking about, and there might even be some hooks
 in trunk for it, is the idea of slave connections (or sub-connections)
 which kind of *is* a pseudo connection. So one could create a connection
 and then a sub/slave connection from that, and then use *that* for
 requests. This maintains the connection-request idea, but allows for
 multiple connections and requests per a single real connection.

That would be useful - and perhaps fold/kill some of the internal redirect 
stuff in as well.

Dw.



Re: httpd - side channel attack - timing of digest comparisons

2015-05-28 Thread Dirk-Willem van Gulik

 On 28 May 2015, at 17:03, William A Rowe Jr wr...@rowe-clan.net wrote:
 
 
 On May 26, 2015 10:31 AM, Dirk-Willem van Gulik di...@webweaving.org wrote:
 
 
   On 26 May 2015, at 17:22, Dirk-Willem van Gulik di...@webweaving.org wrote:
  ..
   So I think that what is needed are two (or three) functions
  ...
   - A string comparison function; where at least one string is under 
   control of the attacker.
 
  Now the issue here is that length is very easily revealed. So I can think 
  of 2 strategies;
 
  -   firmly declare (in the signature of the compare function) one 
  argument as potentially hostile.
 
  And do the comparison largely based on that; which means we only 
  marginally reveal the
  actual length of the string compared to. Below is an example; but 
  my gut feel it is not
  nearly good enough when you can apply a large chunk of statistics 
  against it.
 
  -   treat them both as hostile; and scan for either the shortest or 
  longest one and accept
  that you leak something about length.
 
  Or - if needed - pad this out for strings 1024 (or similar) chars 
  in length by doing always
  that many (which will leak less).
 
  Examples are below. Suggestions appreciated.
 
  Dw.
 
  static int or_bits(int  x) {
    x |= (x >> 4);
    x |= (x >> 2);
    x |= (x >> 1);
    return -(x & 1);
  }
 
  /* Quick mickey mouse version to compare the strings. XXX fixme.
   */
  AP_DECLARE(int) ap_timingsafe_strcmp(const char * hostile_string, const 
  char * to_protect__string) {
  const unsigned char *p1 = (const unsigned char *)hostile_string;
  const unsigned char *p2 = (const unsigned char *)to_protect__string;
  size_t i = 0, i1 = 0 ,i2 = 0;
  unsigned int res = 0;
  unsigned int d1, d2;
 
  do {
  res |= or_bits(p1[i1] - p2[i2]);
 
  d1 = -or_bits(p1[i1]);
  d2 = -or_bits(p2[i2]);
 
  i1 += d1;
  i2 += d2;
  i += (d1 | d2);
 
  # case A
  } while (d1 | d2); // longest one will abort
  # case B
  } while (d1 & d2); // shortest one will abort
  # case C
  } while (i < 1024); } while (d1 | d2); // at least 1024 or longest 
  one/shortest one
 
  // include the length in the comparison; as to avoid foo v.s. 
  foofoofoo to match.
  //
  return (int) (res | ( i1 - i2));
  }
 
 Giving this some thought for the string version, does it make sense to loop 
 the underflow string back to offset zero on EOS?  There is a certain amount 
 of cache avoidance that could cause, but it would dodge the optimization of 
 that phase and ensure the longest-match comparisons are performed (measured 
 by the untrusted input, presumably).
 
So I am currently experimenting with 

https://gist.github.com/dirkx/37c29dc5a82b6deb0bf0

which seems to behave reasonably and does not get optimized out too far.  We 
could perhaps do something where we change the i1/i2 loop indexes
by str[i % i1] instead of i1/i2 that ‘stop’; and keep i1 one higher than i 
until we hit the \0.

Dw





Re: httpd - side channel attack - timing of digest comparisons

2015-05-28 Thread Dirk-Willem van Gulik

 On 28 May 2015, at 17:24, Dirk-Willem van Gulik di...@webweaving.org wrote:
 
 
 On 28 May 2015, at 17:03, William A Rowe Jr wr...@rowe-clan.net wrote:
 
 
 On May 26, 2015 10:31 AM, Dirk-Willem van Gulik di...@webweaving.org wrote:
 
 
   On 26 May 2015, at 17:22, Dirk-Willem van Gulik di...@webweaving.org wrote:
  ..
   So I think that what is needed are two (or three) functions
  ...
   - A string comparison function; where at least one string is 
   under control of the attacker.
 
  Now the issue here is that length is very easily revealed. So I can think 
  of 2 strategies;
 
  -   firmly declare (in the signature of the compare function) one 
  argument as potentially hostile.
 
  And do the comparison largely based on that; which means we only 
  marginally reveal the
  actual length of the string compared to. Below is an example; but 
  my gut feel it is not
  nearly good enough when you can apply a large chunk of statistics 
  against it.
 
  -   treat them both as hostile; and scan for either the shortest or 
  longest one and accept
  that you leak something about length.
 
  Or - if needed - pad this out for strings < 1024 (or similar) chars 
  in length by doing always
  that many (which will leak less).
 
  Examples are below. Suggestions appreciated.
 
  Dw.
 
  static int or_bits(int  x) {
    x |= (x >> 4);
    x |= (x >> 2);
    x |= (x >> 1);
    return -(x & 1);
  }
 
  /* Quick mickey mouse version to compare the strings. XXX fixme.
   */
  AP_DECLARE(int) ap_timingsafe_strcmp(const char * hostile_string, const 
  char * to_protect__string) {
  const unsigned char *p1 = (const unsigned char *)hostile_string;
  const unsigned char *p2 = (const unsigned char 
  *)to_protect__string;
  size_t i = 0, i1 = 0 ,i2 = 0;
  unsigned int res = 0;
  unsigned int d1, d2;
 
  do {
  res |= or_bits(p1[i1] - p2[i2]);
 
  d1 = -or_bits(p1[i1]);
  d2 = -or_bits(p2[i2]);
 
  i1 += d1;
  i2 += d2;
  i += (d1 | d2);
 
  # case A
  } while (d1 | d2); // longest one will abort
  # case B
  } while (d1 & d2); // shortest one will abort
  # case C
  } while (i < 1024); } while (d1 | d2); // at least 1024 or longest 
  one/shortest one
 
  // include the length in the comparison; as to avoid foo v.s. 
  foofoofoo to match.
  //
  return (int) (res | ( i1 - i2));
  }
 
 Giving this some thought for the string version, does it make sense to loop 
 the underflow string back to offset zero on EOS?  There is a certain amount 
 of cache avoidance that could cause, but it would dodge the optimization of 
 that phase and ensure the longest-match comparisons are performed (measured 
 by the untrusted input, presumably).
 
 So I am currently experimenting with 
 
   https://gist.github.com/dirkx/37c29dc5a82b6deb0bf0
 
 which seems to behave reasonably and does not get optimized out too far.  We 
 could perhaps do something where we change the i1/i2 loop indexes
 by str[i % i1] instead of i1/i2 that ‘stop’; and keep i1 one higher than i 
 until we hit the \0.

Updated the above gist with your idea (also shown below).

Dw.

/* Quick mickey mouse version to compare the strings. XXX fixme. 
 */
typedef enum ap_compareflag_t {
AP_HOSTILE_STR1 = 1,  // if a string is ‘hostile’ - we’re fine to abort 
on it early.
AP_HOSTILE_STR2 = 2, // otherwise we hide its length by checking it for 
the length of the other (or both).
AP_MIN1024 = 4   // pretend that they are at least 1024 chars in 
size.
} ap_compareflag_t;
 
AP_DECLARE(int) ap_timingsafe_strcmp(const char * str1, const char * str2, 
ap_compareflag_t flags) {
const unsigned char *p1 = (const unsigned char *)str1;
const unsigned char *p2 = (const unsigned char *)str2;
size_t i = 0, i1 = 1 ,i2 = 1;
unsigned int res = 0;
unsigned int d1, d2;
 
if (flags == 0)
flags = AP_HOSTILE_STR1 | AP_HOSTILE_STR2 | AP_MIN1024;
 
// printf("\n%s=%s\n\t", str1, str2);
do {
  do {
res |= or_bits(p1[i % i1] - p2[i % i2]);
 
d1 = -or_bits(p1[i1]);
d2 = -or_bits(p2[i2]);
 
i1 += d1;
i2 += d2;
i++;
// printf("%c", ((d1 & d2) ? '=' : (d1 ? '1' : (d2 ? '2' : 
'.'))));
 
   } while (i < (flags & AP_MIN1024) * 256);
} while (
((flags & AP_HOSTILE_STR1) & d1) | 
((flags & AP_HOSTILE_STR2) & d2) 
);
res |= (i1 - i2);
 
// printf(" = %d\n", res);
 
// include the length in the comparison; as to avoid foo v.s. foofoofoo 
to match

Re: httpd - side channel attack - timing of digest comparisons

2015-05-26 Thread Dirk-Willem van Gulik
Folks,

Did a scan through a fair bit of our code. mod_digest is not the only place; 
e.g. in basic auth; we are also
not as careful in all cases as we could be. 

So I think that what is needed are two (or three) functions

-   A fairly mundane (binary) timing safe compare that compares two fixed 
lengths (e.g. SHA or MD5) 
strings or binary buffers.

-   A string comparison function; where at least one string is under 
control of the attacker.

As to the first - I think it can be as simple as below - as to the latter - see 
next email. As with that one I am struggling.

Dw

/* Original code for ap_timingsafe_memcmp: libressl-portable under below 
license. 
 *
 * Copyright (c) 2014 Google Inc.
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED AS IS AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/**
 * Compare two binary buffers (of identical length) in a timing safe
 * manner. So as to avoid leaking information about what is compared
 * and thus preventing side channel leaks when for example comparing
 * a checksum or a password.
 *.
 * Both buffers are assumed to have identical length.
 *
 * @param   buf1Buffer 1 to compare buffer 2 to.
 * @param   buf2Buffer 2 
 * @param   len The length of the two buffers.
 * @return  0 when identical, non zero when different.
 */
AP_DECLARE(int) ap_timingsafe_memcmp(const void *b1, const void *b2, size_t len)
{
const unsigned char *p1 = b1, *p2 = b2;
size_t i;
int res = 0, done = 0;

    for (i = 0; i < len; i++) {
        /* lt is -1 if p1[i] < p2[i]; else 0. */
        int lt = (p1[i] - p2[i]) >> CHAR_BIT;

        /* gt is -1 if p1[i] > p2[i]; else 0. */
        int gt = (p2[i] - p1[i]) >> CHAR_BIT;

        /* cmp is 1 if p1[i] > p2[i]; -1 if p1[i] < p2[i]; else 0. */
        int cmp = lt - gt;

        /* set res = cmp if !done. */
        res |= cmp & ~done;

        /* set done if p1[i] != p2[i]. */
        done |= lt | gt;
    }

return (res);
}



Re: httpd - side channel attack - timing of digest comparisons

2015-05-26 Thread Dirk-Willem van Gulik

 On 26 May 2015, at 17:22, Dirk-Willem van Gulik di...@webweaving.org wrote:
..
 So I think that what is needed are two (or three) functions
...
 - A string comparison function; where at least one string is under 
 control of the attacker.

Now the issue here is that length is very easily revealed. So I can think of 2 
strategies;

-   firmly declare (in the signature of the compare function) one argument 
as potentially hostile.

And do the comparison largely based on that; which means we only 
marginally reveal the
actual length of the string compared to. Below is an example; but my 
gut feel is that it is not
nearly good enough when you can apply a large chunk of statistics 
against it.

-   treat them both as hostile; and scan for either the shortest or longest 
one and accept
that you leak something about length.

Or - if needed - pad this out for strings < 1024 (or similar) chars in 
length by doing always
that many (which will leak less).

Examples are below. Suggestions appreciated. 

Dw.

static int or_bits(int x) {
  x |= (x >> 4);    /* smear any set bit of a byte-sized value... */
  x |= (x >> 2);
  x |= (x >> 1);    /* ...down into bit 0 */
  return -(x & 1);  /* -1 if x was non-zero, else 0 */
}

/* Quick mickey mouse version to compare the strings. XXX fixme. 
 */
AP_DECLARE(int) ap_timingsafe_strcmp(const char * hostile_string, const char * to_protect__string) {
    const unsigned char *p1 = (const unsigned char *)hostile_string;
    const unsigned char *p2 = (const unsigned char *)to_protect__string;
    size_t i = 0, i1 = 0, i2 = 0;
    unsigned int res = 0;
    unsigned int d1, d2;

    do {
        res |= or_bits(p1[i1] - p2[i2]);

        d1 = -or_bits(p1[i1]);  /* 1 while string 1 has chars left, else 0 */
        d2 = -or_bits(p2[i2]);  /* 1 while string 2 has chars left, else 0 */

        i1 += d1;
        i2 += d2;
        i += (d1 | d2);

#case A
    } while (d1 | d2); // longest one will abort
#case B
    } while (d1 & d2); // shortest one will abort
#case C
    } while (i < 1024) } while (d1 | d2); // at least 1024 or longest/shortest one

    // include the length in the comparison; so as to avoid 'foo' v.s.
    // 'foofoofoo' matching.
    //
    return (int) (res | ( i1 - i2));
}

Re: Style checker?

2015-05-21 Thread Dirk-Willem van Gulik

 I still develop in what a lot of folks would consider a fairly primitive 
 environment (vi) that doesn't do anything for style checking things like line 
 width/spacing before and after control statements/indentation/variable 
 declaration/etc. I know of the indent tool available on most unix-like 
 systems, but was wondering if you folks use any other tools to help along 
 that path?

Once upon a time 
https://svn.apache.org/repos/asf/httpd/httpd/tags/1.3/APACHE_1_3a1/src/.indent.pro
was used which grew to about 
http://code.openhub.net/file?fid=q84uxUJ8kEH2vVim-QkGoSiCzoccid=h1J7pf7LYjws=fp=305270mpprojSelected=true#L0
— but I suspect that it is way out of date.
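
(For what it is worth - a rough approximation of that old style in GNU indent 
flags would be something along the lines of the below; the exact flag set is 
a guess from memory, the .indent.pro above is the authoritative source:

    indent -i4 -npsl -di0 -br -nce -d0 -cli0 -npcs -nfc1 file.c

i.e. 4 column indents, braces cuddled onto the control statement and no 
padding before function call parens.)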

Dw.





Re: httpd - side channel attack - timing of digest comparisons

2015-05-21 Thread Dirk-Willem van Gulik
Very quick and dirty list of the most obvious places where we compare stuff. 
Currently trying to find some time to figure out if these are all vulnerable; 
or if it is just the two outer ones.

Dw.

Index: modules/aaa/mod_auth_digest.c
===
--- modules/aaa/mod_auth_digest.c   (revision 1680471)
+++ modules/aaa/mod_auth_digest.c   (working copy)
@@ -1394,7 +1394,7 @@
     resp->nonce[NONCE_TIME_LEN] = tmp;
     resp->nonce_time = nonce_time.time;
 
-    if (strcmp(hash, resp->nonce+NONCE_TIME_LEN)) {
+    if (ap_timingsafe_strcmp(hash, resp->nonce+NONCE_TIME_LEN)) {
         ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(01776)
                       "invalid nonce %s received - hash is not %s",
                       resp->nonce, hash);
@@ -1423,7 +1423,7 @@
         }
     }
     else if (conf->nonce_lifetime == 0 && resp->client) {
-        if (memcmp(resp->client->last_nonce, resp->nonce, NONCE_LEN)) {
+        if (ap_timingsafe_memcmp(resp->client->last_nonce, resp->nonce,
+                                 NONCE_LEN)) {
             ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, r, APLOGNO(01779)
                           "user %s: one-time-nonce mismatch - sending "
                           "new nonce", r->user);
@@ -1749,7 +1749,7 @@
 
     if (resp->message_qop == NULL) {
         /* old (rfc-2069) style digest */
-        if (strcmp(resp->digest, old_digest(r, resp, conf->ha1))) {
+        if (ap_timingsafe_strcmp(resp->digest, old_digest(r, resp,
+                                                          conf->ha1))) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(01792)
                           "user %s: password mismatch: %s", r->user,
                           r->uri);
@@ -1784,7 +1784,7 @@
             /* we failed to allocate a client struct */
             return HTTP_INTERNAL_SERVER_ERROR;
         }
-        if (strcmp(resp->digest, exp_digest)) {
+        if (ap_timingsafe_strcmp(resp->digest, exp_digest)) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(01794)
                           "user %s: password mismatch: %s", r->user,
                           r->uri);

Index: include/util_md5.h
===
--- include/util_md5.h  (revision 1680471)
+++ include/util_md5.h  (working copy)
@@ -64,6 +64,35 @@
  */
 AP_DECLARE(char *) ap_md5digest(apr_pool_t *p, apr_file_t *infile);
 
+/**
+ * Compare two binary buffers (of identical length) in a timing safe
+ * manner. So as to avoid leaking information about what is compared
+ * and thus preventing side channel leaks when for example comparing
+ * a checksum or a password.
+ *.
+ * Both buffers are assumed to have identical length.
+ *
+ * @param   buf1Buffer 1 to compare buffer 2 to.
+ * @param   buf2Buffer 2 
+ * @param   len The length of the two buffers.
+ * @return  0 when identical, non zero when different.
+ */
+AP_DECLARE(int) ap_timingsafe_memcmp(const void *b1, const void *b2, size_t len);
+
+/**
+ * Compare \0 terminated strings in a time safe manner.
+ * So as to avoid leaking information about what is compared
+ * and thus preventing side channel leaks when for example comparing
+ * a checksum or a password.
+ *.
+ * Both strings are assumed to be \0 terminated.
+ *
+ * @param   str1   pointer to a \0 terminated string,
+ * @param   str2   pointer to a \0 terminated string.
+ * @return  0 when identical or both NULL, non zero when different or one of 
the string pointers is NULL.
+ */
+AP_DECLARE(int) ap_timingsafe_strcmp(const char * str1, const char * str2);
+
 #ifdef __cplusplus
 }
 #endif

Index: server/util_md5.c
===
--- server/util_md5.c   (revision 1680471)
+++ server/util_md5.c   (working copy)
@@ -164,3 +164,137 @@
 return ap_md5contextTo64(p, context);
 }
 
+/* Original code for ap_timingsafe_memcmp: libressl-portable under below 
license. 
+ *
+ * Copyright (c) 2014 Google Inc.
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED AS IS AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+/**
+ * Compare two binary buffers (of identical length) in a timing safe
+ * manner. So as to avoid leaking information about what is compared
+ * and thus preventing side channel leaks when for example comparing
+ * a checksum or a password.
+ *.
+ * Both buffers are assumed to have 

httpd - side channel attack - timing of digest comparisons

2015-05-21 Thread Dirk-Willem van Gulik
Folks,

security@ got a notification of a potential side channel attack. The original 
message is below (sans details on the poster who wants to remain private).

In short - we’re comparing the digest in mod-auth-digest in a manner that may 
reveal how much is actually correct; leading potentially to a timing attack.

After discussing this on security@ we surmised that the risks are not overly 
high; and that fixing this may warrant some wider discussion/more eyeballs 
across the code base for similar things.

Options discussed sofar on security@ and general thoughts are:

1)  adding a timing safe compare to util_md5.c (for now) with 
an idea to move this to APR longer term.

Besides the link below - 
https://github.com/jedisct1/libsodium/blob/master/src/libsodium/sodium/utils.c#L82
 
https://github.com/jedisct1/libsodium/blob/master/src/libsodium/sodium/utils.c#L82
 
and the openbsd one was mentioned.

2)  The mail below covers just the memcmp comparison; there is an earlier strcmp
comparison there as well.

3)  In general - string comparisons are more messy; as there is the length 
(difference)
issue that is harder to hide. And moving the comparison into sha/md5 
space is
not trivial without a length related side channel on the checksum (see 
the sketch after this list).

4)  Is this also a moment to reconsider the use of md5; and go to a decent 
SHA ?

5)  We have lots of other places which may need a bit of thought; Yann 
mentioned places
like: e.g. strncmp(), strlen(), str[n]cat(), memcpy() and memcmp() are 
used
by apr_md5_encode(), apr_password_validate(), apr_sha1_update(), …

Avoiding timing attacks requires at least to not use these functions
in APR crypto, and have their equivalent there, and then use them
where appropriate in httpd.
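
To make (3) a bit more concrete - the sketch referred to above: one obvious 
workaround (purely an illustration, not a committed design; digest_strcmp() 
is a made-up name) is to hash both strings to a fixed width digest first and 
then compare the digests with the timing safe memcmp from (1). The compare 
itself is then length independent - but the strlen()/hash calls still take 
length dependent time, which is exactly the residual side channel meant in (3):

    #include <string.h>
    #include <openssl/sha.h>

    /* as proposed in this thread; assumed to be available */
    extern int ap_timingsafe_memcmp(const void *b1, const void *b2, size_t len);

    static int digest_strcmp(const char *s1, const char *s2)
    {
        unsigned char d1[SHA256_DIGEST_LENGTH], d2[SHA256_DIGEST_LENGTH];

        SHA256((const unsigned char *)s1, strlen(s1), d1);
        SHA256((const unsigned char *)s2, strlen(s2), d2);

        /* fixed width compare - neither content nor length steers it */
        return ap_timingsafe_memcmp(d1, d2, sizeof(d1));
    }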

Thanks,

Dw.

  Forwarded Message 
 Subject: httpd: Side Channel Attack
 Date: Tue, 19 May 2015 18:15:57 +0700
 
 Hi There,
 
 Since memcmp() performs a byte-by-byte comparison and does not execute
 in constant-time, you need to use a better function.
 
 Vuln code which is vulnerable to timing attacks:
 
 -
 ./modules/slotmem/mod_slotmem_shm.c:215: if (memcmp(digest, digest2,
 APR_MD5_DIGESTSIZE)) {
 
 ./modules/aaa/mod_auth_digest.c:1426: if
 (memcmp(resp->client->last_nonce, resp->nonce, NONCE_LEN)) {
 
 ./modules/ssl/ssl_ct_log_config.c:282: if (memcmp(computed_log_id,
 log_id_bin, LOG_ID_SIZE)) {
 -
 
 Please take a look at memcmp_nta() http://lkml.org/lkml/2013/2/10/13





Re: Version check idea

2015-04-21 Thread Dirk-Willem van Gulik
On 21 Apr 2015, at 15:55, Jim Jagielski j...@jagunet.com wrote:

 For comment: What do people think about adding the capability that
 when httpd is started, it tries to access http://httpd.apache.org/doap.rdf
 to check its version number with the latest one referred to in that
 file and, if a newer one exists, it prints out a little message
 in the error.log (or stderr)…

First reaction - Eek & Evil ! Breaks about every assumption I have about well 
defined unix programmes.

However if it were something like 'httpd -C / --configcheck' or similarly for 
apachectl then I could easily imagine that to be rather useful.
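
Something as trivial as the below would already do for the explicit variant 
(a sketch only - it assumes the <revision> tags in doap.rdf keep their 
current shape):

    # hypothetical 'apachectl versioncheck':
    latest=$(curl -s https://httpd.apache.org/doap.rdf | \
        sed -n 's/.*<revision>\([0-9.]*\)<\/revision>.*/\1/p' | head -1)
    running=$(httpd -v | sed -n 's/^Server version: Apache\/\([0-9.]*\).*/\1/p')
    [ "$latest" = "$running" ] || echo "httpd $latest available (running $running)"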

Dw.

Re: [Patch] mod_ssl SSL_CLIENT_CERT_SUBJECTS - access to full client certificate chain

2014-11-06 Thread Dirk-Willem van Gulik

 On 06 Nov 2014, at 14:14, Andreas B. regis...@progandy.de wrote:
 
 Am 06.11.2014 um 08:34 schrieb Dirk-Willem van Gulik:
 On 06 Nov 2014, at 07:05, Kaspar Brand httpd-dev.2...@velox.ch 
 mailto:httpd-dev.2...@velox.ch wrote:
 
 11.3.1 Certificate exact match
 …
  CertificateExactAssertion ::= SEQUENCE {
  serialNumber  CertificateSerialNumber,
  issuerName }
 ...
 (i.e., we are again back at the point that uniqueness of an X.509
 certificate is achieved by its issuer DN plus serial number)
 And even that is LDAPs take on it - in a lot of practical cases it gets more 
 complex; especially if the leaf of the one-but-last intermediate is cross 
 signed.
 
 Making above more of an LDAP thing than a ‘protocol’ thing.
 
 So therefore:
 
 Is there another way to do this?
 Manually performing what certificateExactMatch is specifying, I would
 say - i.e., use the (SSL_CLIENT_M_SERIAL,SSL_CLIENT_I_DN) tuple as a
 unique identifier for a specific client certificate.
 I would argue that the ‘best’ thing we can do is first on the SSL 
 termination side — follow the chain; and if there we *conclude* that all is 
 well - and we present the evidence of that conclusion ’to what is behind’.
 
 And ideally make that as hard to mis-interpret. So Graham his idea of 
 providing a single ‘unique' string which gives the DN’s in order (along with 
 the usual SSL_CLIENT.. env vars; including the full ASN.1 if you are so 
 inclined) is quite pragmatic. As that string is ‘unique’ within the promise 
 the web frontend with its current settings is making.
 
 And it cuts out a lot of easy to make errors. And those who want to do 
 better can simply use SSL_CLIENT_CERT and SSL_CLIENT_CERT_0…N — with 
 sufficient code to understand things like cross signing and funny order 
 issues. As parsing that is not trivial when there are multiple 
 selfsigned/roots in a chain ‘up’.
 
 Dw.
 If you want to identify a specific certificate, wouldn't it be possible to 
 use sha1/256 fingerprints created with X509_digest? Or is that something LDAP 
 doesn't support? That could be exported with SSL_CLIENT_CERT_THUMB and 
 SSL_CLIENT_CERT_THUMB_ALG for the algorithm used.

One could. The issue is that in some systems (e.g. a medical one I am currently 
trying to come to terms with) the certs are renewed very often; and the 
fingerprint does not stay stable. This is also an issue with using the serial 
and the issuer DN.
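
(For completeness - computing such a thumbprint is cheap enough; a rough 
sketch, error handling elided and cert_thumbprint() being a made-up name:

    #include <stdio.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>

    /* hex encode the SHA-256 digest over the DER encoding of the cert;
     * buf must hold at least 2 * EVP_MAX_MD_SIZE + 1 bytes. */
    static int cert_thumbprint(X509 *cert, char *buf, size_t buflen)
    {
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int n, i;

        if (!X509_digest(cert, EVP_sha256(), md, &n) || buflen < 2 * n + 1)
            return -1;
        for (i = 0; i < n; i++)
            sprintf(buf + 2 * i, "%02x", md[i]);
        return 0;
    }

but - as said - it is only as stable as the certificate itself.)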

Dw.

Re: [Patch] mod_ssl SSL_CLIENT_CERT_SUBJECTS - access to full client certificate chain

2014-11-05 Thread Dirk-Willem van Gulik

 On 06 Nov 2014, at 07:05, Kaspar Brand httpd-dev.2...@velox.ch wrote:
 
 11.3.1 Certificate exact match
 …
  CertificateExactAssertion ::= SEQUENCE {
  serialNumber  CertificateSerialNumber,
  issuerName }
...
 (i.e., we are again back at the point that uniqueness of an X.509
 certificate is achieved by its issuer DN plus serial number)

And even that is LDAPs take on it - in a lot of practical cases it gets more 
complex; especially if the leaf of the one-but-last intermediate is cross 
signed.

Making above more of an LDAP thing than a ‘protocol’ thing.

So therefore:

 Is there another way to do this?
 
 Manually performing what certificateExactMatch is specifying, I would
 say - i.e., use the (SSL_CLIENT_M_SERIAL,SSL_CLIENT_I_DN) tuple as a
 unique identifier for a specific client certificate.

I would argue that the ‘best’ thing we can do is first on the SSL termination 
side — follow the chain; and if there we *conclude* that all is well - and we 
present the evidence of that conclusion ’to what is behind’.

And ideally make that as hard to mis-interpret. So Graham his idea of providing 
a single ‘unique' string which gives the DN’s in order (along with the usual 
SSL_CLIENT.. env vars; including the full ASN.1 if you are so inclined) is 
quite pragmatic. As that string is ‘unique’ within the promise the web frontend 
with its current settings is making.

And it cuts out a lot of easy to make errors. And those who want to do better 
can simply use SSL_CLIENT_CERT and SSL_CLIENT_CERT_0…N — with sufficient code 
to understand things like cross signing and funny order issues. As parsing that 
is not trivial when there are multiple selfsigned/roots in a chain ‘up’.

Dw.

Fwd: CVE-2014-3671: DNS Reverse Lookup as a vector for the Bash vulnerability (CVE-2014-6271 et.al.)

2014-10-13 Thread Dirk-Willem van Gulik
Folks,

You may have spotted below going out.

Hits us tangentially as ap_get_remote_host() gets thus populated; and it can 
then potentially hit a bash shell through variables set in places like ssl, 
setenvif, rewrite, the logger and ajp.

Though by then it is more about input sanitising. I guess one could make a case 
for APR to be a bit more careful with ‘wild’ reverse lookup returns. 

But then again - one should long term gravitate towards UTF8 safeness; so that 
would suggest that dealing with it later in the chain is better.

Dw.


find_allowdeny

 Begin forwarded message:
 
 Date: 13 Oct 2014 12:03:26 CEST
 From: Dirk-Willem van Gulik di...@webweaving.org
 To: di...@webweaving.org
 Subject: CVE-2014-3671: DNS Reverse Lookup as a vector for the Bash 
 vulnerability (CVE-2014-6271 et.al.)
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Security Advisory 
 
   DNS Reverse Lookup as a vector for the Bash vulnerability (CVE-2014-6271 
 et.al.)
 
   CVE-2014-3671
 
 references:
 CVE-2014-6271, CVE-2014-7169, CVE-2014-6277, CVE-2014-6278 
 CVE-2014-7186 and, CVE-2014-7187
 
 * Summary:
 
 Above CVEs detail a number of flaws in bash related to the parsing 
 of environment variables (aka BashBug, Shellshock). Several networked
 vectors for triggering this bug have been discovered; such as through
 dhcp options and CGI environment variables in webservers [1].
 
 This document is to advise you of an additional vector; through a 
 reverse lookup in DNS; and where the results of this lookup are
 passed, unsanitized, to an environment variable (e.g. as part of
 a batch process). 
 
 This vector is subtly different from a normal attack vector, as the
 attacker can 'sit back' and let a (legitimate) user trigger the
 issue; hence keeping the footprint for a IDS or WAAS to act on small.
 
 * Resolvers/systems affected:
 
 At this point of time the stock resolvers (in combination with the libc
 library) of OSX 10.9 (all versions) and 10.10/R2 are the only known
 standard installations that pass the bash exploit string back and
 up to getnameinfo(). 
 
 That means that UNpatched systems are vulnerable through this vector
 PRIOR to the bash update documented in http://support.apple.com/kb/DL1769.
 
 Most other OS-es (e.g. RHEL6, CentOS, FreeBSD 7 and up) seem 
 unaffected in their stock install as libc/libresolver and DNS use 
 different escaping mechanisms (octal v.s. decimal).
 
 We're currently investigating a number of async DNS resolvers
 that are commonly used in DB cache/speed optimising products and
 application level/embedded firewall systems.
 
 Versions affected: 
 
 See above CVEs as your primary source.
 
 * Resolution and Mitigation:
 
 In addition to the mitigations listed in above CVEs - IDSes and similar 
 systems may be configured to parse DNS traffic in order to spot the 
 offending strings.
 
 Also note that Apple DL1769 addresses the Bash issue; NOT the vector
 through the resolver. 
 
 * Reproducing the flaw:
 
 A simple zone file; such as:
 
 $TTL 10;
 $ORIGIN in-addr.arpa.
 @ IN SOA ns.boem.wleiden.net dirkx.webweaving.org (
        666 ; serial
        360 180 3600 1800 ; very short lifespan.
        )
 IN  NS 127.0.0.1
 *   PTR  "() { :;}; echo CVE-2014-6271, CVE-2014-7169, RDNS"
 
 can be used to create an environment in which to test the issue with existing 
 code
 or with the following trivial example:
 
   #include <sys/socket.h>
   #include <netdb.h>
   #include <assert.h>
   #include <arpa/inet.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>
   #include <netinet/in.h>
 
   int main(int argc, char ** argv) {
        struct in_addr addr;
        struct sockaddr_in sa;
        char host[1024];
 
        assert(argc == 2);
        assert(inet_aton(argv[1], &addr) == 1);
 
        sa.sin_family = AF_INET;
        sa.sin_addr = addr;
 
        assert(0 == getnameinfo((struct sockaddr *)&sa, sizeof sa,
                 host, sizeof host, NULL, 0, NI_NAMEREQD));
 
        printf("Lookup result: %s\n\n", host);
 
        assert(setenv("REMOTE_HOST", host, 1) == 0);
        execl("/bin/bash", NULL);
   }
 
 
 Credits and timeline
 
 The flaw was found and reported by Stephane Chazelas (see CVE-2014-6271
 for details).  Dirk-Willem van Gulik (dirkx(at)webweaving.org) found
 the DNS reverse lookup vector.
 
 09-04-2011 first reported.
 2011, 2014 issue verified on various embedded/firewall/waas
   systems and reported to vendors. 
 ??-09-2014 Apple specific exploit seen.
 11-10-2014 Apple confirms that with DL1769 in place that
           "The issue that remains, while it raises 
           interesting questions, is not a security 
           issue in and of itself."
 
 * Common Vulnerability Scoring (Version 2) and vector:
 
 See CVE-2014-6271.
 
 1: https://github.com/mubix/shellshocker-pocs/blob/master/README.md

Re: PROXY_WORKER_MAX_NAME_SIZE

2014-08-29 Thread Dirk-Willem van Gulik

 On 29 Aug 2014, at 21:05, Jim Jagielski j...@jagunet.com wrote:
 
 FWIW, this is reported in
 
https://issues.apache.org/bugzilla/show_bug.cgi?id=53218
 
 I was thinking of a dual-approach: Increase PROXY_WORKER_MAX_NAME_SIZE
 and making the truncation of the worker name (s-name) non-fatal
 (but logged at ALERT)...
 

I've been bitten by this quite a few times as well. And always when you least 
expect it. 

Wondering if it is time to push long names into a uuid or hash; with a 
translation table/db file if needed. 
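
Roughly along these lines, say (a sketch only; worker_key() is a made-up 
name and the reverse mapping table for logging is left out):

    #include <string.h>
    #include "apr_md5.h"
    #include "apr_strings.h"

    /* Fold an over-long worker name into a fixed 32 char hex digest so
     * it always fits the slot; a side table can map it back to the full
     * name where needed. */
    static const char *worker_key(apr_pool_t *p, const char *name,
                                  apr_size_t max)
    {
        unsigned char d[APR_MD5_DIGESTSIZE];
        char hex[2 * APR_MD5_DIGESTSIZE + 1];
        int i;

        if (strlen(name) < max)
            return name;               /* short enough - use as is */

        apr_md5(d, name, strlen(name));
        for (i = 0; i < APR_MD5_DIGESTSIZE; i++)
            apr_snprintf(hex + 2 * i, 3, "%02x", d[i]);
        return apr_pstrdup(p, hex);
    }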

Dw. 

 On Aug 29, 2014, at 2:27 PM, Jim Jagielski j...@jagunet.com wrote:
 
 I'd like to propose that we bump up PROXY_WORKER_MAX_NAME_SIZE
 again, both in trunk but also allow for backporting to
 2.4.x as well.
 


smime.p7s
Description: S/MIME cryptographic signature


Re: Odd - SSLCipherSuite

2014-05-19 Thread Dirk-Willem van Gulik

On 17 May 2014, at 14:15, Dr Stephen Henson 
shen...@opensslfoundation.com wrote:

 On 14/05/2014 10:23, Dirk-Willem van Gulik wrote:
 Now I must be getting rusty - we have in the config file
 
 SSLCipherSuite -ALL:ECDHE-RSA-AES256-SHA
 SSLProtocol -ALL +TLSv1.1 +TLSv1.2 +SSLv3
 
 with the first resolving nicely with
 
 openssl ciphers -ALL:ECDHE-RSA-AES256-SHA
 
 to just
 
 ECDHE-RSA-AES256-SHA
 
 
 Unusual syntax though that should work. I'd normally just use the single
 ciphersuite name in the string:
 
 ECDHE-RSA-AES256-SHA

That still gives us the same results.

 
 So my assumption is that this server will insist on talking above - and
 nothing else.
 
 And on the wire - if I observe the Server Hello I see:
 
 Secure Sockets Layer
   TLSv1.2 Record Layer: Handshake Protocol: Server Hello
 ...
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
 
 which is sort of what I expect. 
 
 
 I wouldn't expect that as that isn't the single ciphersuite you've specified.

Ok.

 However when I throw 
 
 https://www.ssllabs.com/ssltest/analyze.html
 
 their analyzer at it - it seems to be quite able to convince the server
 to say hello's with
 
SSLv3 Record Layer: Handshake Protocol: Server Hello
   Content Type: Handshake (22)
Version: SSL 3.0 (0x0300)
 ...
Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
 
 or
 
   TLSv1.2 Record Layer: Handshake Protocol: Server Hello
 ...
   Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)
 
 And so on*. I must be missing something very obvious here! Am I
 misunderstanding SSLCipherSuite or is there something specific about 1.2 
 which
 makes certain things mandatory and not under control of SSLCipherSuite? 
 
 
 It looks like OpenSSL isn't receiving that cipher string properly or if it is
 being overridden by something else possibly elsewhere in the config file. You
 can probe individual ciphersuites using s_client like this:
 
 openssl s_client -connect www.hostname.com:443 \
   -cipher ECDHE-RSA-AES256-GCM-SHA384
 
 If it isn't supported the connection shouldn't complete.

Right - yet it does - and matches the suites found by www.ssllabs.com as well. 
I’ll instrument OpenSSL a bit to see
what it actually receives and thinks it is doing.

Perhaps apache manages to confuse some context.

Dw.

Re: Odd - SSLCipherSuite

2014-05-19 Thread Dirk-Willem van Gulik

On 19 May 2014, at 15:04, Dirk-Willem van Gulik di...@webweaving.org 
wrote:

 On 17 May 2014, at 14:15, Dr Stephen Henson 
 shen...@opensslfoundation.com wrote:
 On 14/05/2014 10:23, Dirk-Willem van Gulik wrote:
 Now I must be getting rusty - we have in the config file
 
 SSLCipherSuite -ALL:ECDHE-RSA-AES256-SHA
 SSLProtocol -ALL +TLSv1.1 +TLSv1.2 +SSLv3
 
 with the first resolving nicely with
 
 openssl ciphers -ALL:ECDHE-RSA-AES256-SHA
 
 to just
 
 ECDHE-RSA-AES256-SHA
...
 So my assumption is that this server will insist on talking above - and
 
 nothing else.
 
 And on the wire - if I observe the Server Hello I see:
 
 Secure Sockets Layer
   TLSv1.2 Record Layer: Handshake Protocol: Server Hello
 ...
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
 
 which is sort of what I expect. 
…..
 However when I throw 
 
 
 https://www.ssllabs.com/ssltest/analyze.html
 
 their analyzer at it - it seems to be quite able to convince the server
 to say hello's with
 
SSLv3 Record Layer: Handshake Protocol: Server Hello
   Content Type: Handshake (22)
Version: SSL 3.0 (0x0300)
 ...
Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
….
 
 It looks like OpenSSL isn't receiving that cipher string properly or if it is
 being overridden by something else possibly elsewhere in the config file. You
 can probe individual ciphersuites using s_client like this:
 
 openssl s_client -connect www.hostname.com:443 \
  -cipher ECDHE-RSA-AES256-GCM-SHA384
 
 If it isn't supported the connection shouldn't complete.
 
 Right - yet it does - and matches the suites found by www.ssllabs.com as 
 well. I’ll instrument OpenSSL a bit to see
 what it actually receives and thinks it is doing.
 
 Perhaps apache manages to confuse some context.

Ok - so OpenSSL is not at fault. It is in apache config land that we confuse 
contexts between
virtualhosts; the _default_:443, the *:443 and the 'base' virtual hosts - 
and I think that this
is almost a 'must' as soon as SNI is active. And we cannot really solve it with 
-ALL or !ALL.

Will dig a bit deeper - but my guess is that the 'best' solution may well be a 
WARN flag if we
detect an 'override' on the same ssl context and/or an INFO flag that shows the 
per VHOST
actual result.
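
(In the meantime the per-vhost outcome is easy to probe from the outside - 
hostnames below are placeholders:

    openssl s_client -connect example.org:443 -servername vhost1.example.org \
        </dev/null 2>/dev/null | egrep 'Protocol|Cipher'
    openssl s_client -connect example.org:443 -servername vhost2.example.org \
        </dev/null 2>/dev/null | egrep 'Protocol|Cipher'

i.e. one handshake per SNI name - which shows which suite each vhost actually 
ends up negotiating.)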

Will puzzle a bit,

Dw



Re: Odd - SSLCipherSuite

2014-05-16 Thread Dirk-Willem van Gulik

On 14 May 2014, at 19:10, Plüm, Rüdiger, Vodafone Group 
ruediger.pl...@vodafone.com wrote:

 Which Apache version do you use?

Below was with:

Apache/2.4.9 
OpenSSL 1.0.1e-freebsd

but I reverted to that from a patched/hacked build from HEAD while 
investigating the issue. Does this ring a bell?

Dw.


 From: Dirk-Willem van Gulik [mailto:di...@webweaving.org] 
 Sent: Wednesday, 14 May 2014 11:23
 To: dev@httpd.apache.org
 Subject: Odd - SSLCipherSuite
  
 Now I must be getting rusty - we have in the config file
 
   SSLCipherSuite -ALL:ECDHE-RSA-AES256-SHA
   SSLProtocol -ALL +TLSv1.1 +TLSv1.2 +SSLv3
 
 with the first resolving nicely with
 
   openssl ciphers -ALL:ECDHE-RSA-AES256-SHA
 
 to just
 
   ECDHE-RSA-AES256-SHA
 
  So my assumption is that this server will insist on talking above - and
  nothing else.
 
  And on the wire - if I observe the Server Hello I see:
 
   Secure Sockets Layer
   TLSv1.2 Record Layer: Handshake Protocol: Server Hello
   ...
    Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
 
  which is sort of what I expect. 
 
 However when I throw 
 
   https://www.ssllabs.com/ssltest/analyze.html
 
  their analyzer at it - it seems to be quite able to convince the server
  to say hello's with
 
   SSLv3 Record Layer: Handshake Protocol: Server Hello
   Content Type: Handshake (22)
   Version: SSL 3.0 (0x0300)
   ...
   Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
 
 or
 
TLSv1.2 Record Layer: Handshake Protocol: Server Hello
   ...
Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)

 And so on*. I must be missing something very obvious here! Am I  
 misunderstanding SSLCipherSuite or is there something specific about 1.2 
 which makes certain things mandatory and not under control of SSLCipherSuite? 
 
 Dw.
 
 
 
 
 * besides Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
 Server Hello's with 
 
 Cipher Suite: TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016)
Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (0x0067)
Cipher Suite: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x006b)
Cipher Suite: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0045)
Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0088)
Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)
Cipher Suite: TLS_DHE_RSA_WITH_SEED_CBC_SHA (0x009a)
Cipher Suite: TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)
Cipher Suite: TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xc011)
Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c)
Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d)
Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d)
Cipher Suite: TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0041)
Cipher Suite: TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0084)
Cipher Suite: TLS_RSA_WITH_DES_CBC_SHA (0x0009)
Cipher Suite: TLS_RSA_WITH_IDEA_CBC_SHA (0x0007)
Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
Cipher Suite: TLS_RSA_WITH_SEED_CBC_SHA (0x0096)



Odd - SSLCipherSuite

2014-05-14 Thread Dirk-Willem van Gulik
Now I must be getting rusty - we have in the config file

SSLCipherSuite -ALL:ECDHE-RSA-AES256-SHA
SSLProtocol -ALL +TLSv1.1 +TLSv1.2 +SSLv3

with the first resolving nicely with

openssl ciphers -ALL:ECDHE-RSA-AES256-SHA

to just

ECDHE-RSA-AES256-SHA

So my assumption is that this server will insist on talking above - and
nothing else.

And on the wire - if I observe the Server Hello I see:

Secure Sockets Layer
TLSv1.2 Record Layer: Handshake Protocol: Server Hello
...
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)

which is sort of what I expect. 

However when I throw 

https://www.ssllabs.com/ssltest/analyze.html

their analyzer at it - it seems to be quite able to convince the server
to say hello's with

SSLv3 Record Layer: Handshake Protocol: Server Hello
Content Type: Handshake (22)
Version: SSL 3.0 (0x0300)
...
Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)

or

   TLSv1.2 Record Layer: Handshake Protocol: Server Hello
...
   Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)
   
And so on*. I must be missing something very obvious here! Am I  
misunderstanding SSLCipherSuite or is there something specific about 1.2 which 
makes certain things mandatory and not under control of SSLCipherSuite? 

Dw.




* besides Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
Server Hello's with 

Cipher Suite: TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (0x0067)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x006b)
   Cipher Suite: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f)
   Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0045)
   Cipher Suite: TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0088)
   Cipher Suite: TLS_DHE_RSA_WITH_DES_CBC_SHA (0x0015)
   Cipher Suite: TLS_DHE_RSA_WITH_SEED_CBC_SHA (0x009a)
   Cipher Suite: TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012)
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
   Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)
   Cipher Suite: TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xc011)
   Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
   Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
   Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c)
   Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
   Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
   Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d)
   Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d)
   Cipher Suite: TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (0x0041)
   Cipher Suite: TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (0x0084)
   Cipher Suite: TLS_RSA_WITH_DES_CBC_SHA (0x0009)
   Cipher Suite: TLS_RSA_WITH_IDEA_CBC_SHA (0x0007)
   Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
   Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
   Cipher Suite: TLS_RSA_WITH_SEED_CBC_SHA (0x0096)

Re: DOS-Protection: RequestReadTimeout-like option missing

2013-05-23 Thread Dirk-Willem van Gulik

On 11 May 2013, at 20:26, Reindl Harald h.rei...@thelounge.net wrote:

 after the connection is established and in case of connect
 you have already passed the TCP transmissions and kernel
 settings like
 
 net.ipv4.tcp_fin_timeout = 5
 net.ipv4.tcp_retries1 = 5
 net.ipv4.tcp_syn_retries = 5
 net.ipv4.tcp_synack_retries = 5

The way I usually deal with this is three fold - and I think that it a) behoves 
apache/traffic server to allow admins to configure this in widely varying ways 
while b) having somewhat sane middle of the road settings.

-   Use *bsd/linux accept filters or similar just in front to 
not kick off too much of the http stack without having the
client invest in sending a few decent bytes and a bit of
TCP stack state (a config sketch follows after this list).

In practice, with mobile clients, I've found it hard to
be too strict. Many seconds may pass before a long haul
connection has its 'GET' sent.

Above tcp settings are appropriate for certain modern
internet sites in modernly wired countries - but I would
not see them as universally good.

So not much we should do I guess - beyond a couple
of best practice hints covering 2 or 3 widely
different scenario's.

-   Try to avoid too much code waking up to deal with the
GET, and subsequently, too much code from lingering
around after the entire reply is generated just to
'sent out' this reply byte by byte. Which often is as
simple as a small proxy in front.

Likewise - this is not universally good - as there are
plenty of situations where local skill  incident response
patterns would cause such an approach to make things
less (instead of more) reliable.

So again - we can just suggest some best pratice.

-   Hand off (or kill) seriously lengthy things to a special
polling/async (i.e. single thread/process 'ab' select/poll 
style with linked lists buffer/pointers) process. E.g.
using things like inter process file descriptor passing
or some N-Path thing at TCP level.

Which is again - just one of the many ways to approach
things. And again - lots of variants possible.
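
(For the first of these - the config sketch promised above: on FreeBSD with 
accf_http/accf_data loaded this is roughly the stock AcceptFilter defaults, 
shown here explicitly; on Linux the same directive maps onto 
TCP_DEFER_ACCEPT:

    AcceptFilter http  httpready
    AcceptFilter https dataready

- i.e. don't wake up a worker until a full request, respectively some data, 
has arrived.)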

So am doubtful if this sort of knowledge should be part of the default. 

Think that those settings should be fairly conservative - designed to work in a 
wide range of settings. 

Even if that means you can hog resources remotely with relative ease - as it is 
hard to 
know ahead of time if this is an enterprise-server sending large java generated 
blobs to people on a local LAN or a small server doing short ajax-y replies to 
mobile clients with 10's of seconds idleness in lots of parallel connections.

Just my 2 pence. 

:-) 
-- Dw.




smime.p7s
Description: S/MIME cryptographic signature


Re: URL scanning by bots

2013-05-03 Thread Dirk-Willem van Gulik

On 3 May 2013, at 10:55, Marian Marinov m...@yuhu.biz wrote:
 
 If Apache by default delays 404s, this may have some effect in the first 
 month or two after the release of this change. But then the the botnet 
 writers will learn and update their software.
 I do believe that these guys are monitoring mailing lists like these or at 
 least reading the change logs of the most popular web servers.
 So, I believe that such change would have a very limited impact on the whole 
 Internet or at least will be combated fairly easy.

FWIW - the same sentiments were expressed when 'greylisting[1]' in SMTP came 
in vogue. For small relays (speaking just from personal experience and from the 
vantage of my own private tiny MTA's) that has however not been the case. 
Greylisting did dampen things significantly - and the effect lasts to this day.

But agreed - I think it is important to deal with this issue differently that 
with the overload issue.

Dw.

1: http://en.wikipedia.org/wiki/Greylisting

smime.p7s
Description: S/MIME cryptographic signature


Re: URL scanning by bots

2013-05-01 Thread Dirk-Willem van Gulik
On 1 May 2013, at 13:31, Graham Leggett minf...@sharp.fm wrote:
 
 The evidence was just explained - a bot that does not get an answer quick 
 enough gives up and looks elsewhere.
 The key words are looks elsewhere.


For what it is worth - I've been experimenting with this (up till about 6 
months ago) on a machine of mine. Having the 200, 403, 404, 500 etc determined 
by an entirely unscientific 'modulo' of the IP address. Both on the main URL as 
well as on a few PHP/plesk hole URLs. And have ignored/behaved normally for any 
source IP that has (ever) fetched robots.txt from the same IP masked by the 
first 20 bits.

That showed that bots indeed slow down/do-not-come back so soon if you give 
them a 403 or similar - but I saw no differences as to which non-200 you give 
them (not tried slow reply or no reply). Do note though that I was focusing on 
naughty non-robots.txt-fetching bots.

Dw





smime.p7s
Description: S/MIME cryptographic signature


Re: URL scanning by bots

2013-05-01 Thread Dirk-Willem van Gulik
I think we're mixing three issues

1)  Prevent Starvation.

protecting a server from server side/machine starvation (i.e. running 
out of file descriptors, sockets, mbuf's, whatever).

So here you are in the domain where there is no argument in terms of 
protocol violation/bad citizens; there simply
is no resource - so something's gotta give.

So in this case - having hard caps on certain things is helpful (an 
illustrative set is sketched after this list). And if for certain setups 
the current MaxXXX are
not good enough (e.g. to keep a slowloris in check as per normal modus 
operandi) - then that needs to be fixed.

IMHO this is something dear to me, as a developer. And it should be 
fixed 'by default'. I.e. a normally configured
apache should behave sensibly in situations like this.

And this is 'always' the case.

There is a special case of this - and that is of sites which to some 
extent have shot themselves in the foot
with a 'bad' architecture; where somehow some application/language 
environment has a tendency to
eat rather a lot of resources; and leave (too little) to the httpd 
daemon.

2)  Dealing with Noise

Generally observing that on a quiet server some 30-60% (speaking for 
myself here) of your 'load' is caused
by dubious/robotic stuff which does not benefit 'you' as the site owner 
with a desire to be found/public directly
or even indirectly (e.g. like a webcrawler of a search engine would do).

And this then begs the question - what can I, as a site owner (rather 
than a developer) configure - even if
that means that I 'break the internet' a bit. 

And this is something you decide on a per server basis; for long 
periods of time.

3)  Dealing with Attacks.

Dealing with specific attempts to overload a server one way or the 
other; with the intention to lock
other users out.

For most of us, and unlike above two examples, a specific situation; 
and one which probably last
relatively short (days, weeks).

And for which you are willing to do fairly draconian things - even if 
that significantly breaks normal
internet standard behaviour.
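
To make the first of these concrete - the illustrative set of hard caps 
promised above; values are deliberately middle of the road and to be tuned 
per site (MaxRequestWorkers is the 2.4 name; MaxClients in 2.2):

    Timeout               30
    KeepAliveTimeout      5
    LimitRequestLine      8190
    LimitRequestFields    100
    LimitRequestFieldSize 8190
    MaxRequestWorkers     256
    # with mod_reqtimeout loaded:
    RequestReadTimeout    header=20-40,MinRate=500 body=20,MinRate=500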

Would be useful to discuss each separately. As I do agree with some of the 
posters that apache is (no longer)
that strong* in area 1 and 3. And some modernisation may do wonders,

Thanks,

Dw

*: i.e. meaning that one tends to do highly specific things when dealing with 
issues like that; and while
   generally (very) effective - they are not part of core apache lore & default 
config.

smime.p7s
Description: S/MIME cryptographic signature


Re: Small things to do

2011-11-08 Thread Dirk-WIllem van Gulik

On 8 Nov 2011, at 23:03, Daniel Ruggeri wrote:

 On 11/8/2011 3:10 PM, Stefan Fritsch wrote:
* mod_ssl's proxy support only allows one proxy client certificate per
  frontend virtual host. Lift this restriction.
  jim sez: Why a blocker?, pgollucci +1 jim
  wrowe asks: what's the API change required?
 
 I'm not sure I understand this one... does anyone have the history to
 elaborate?
 

Three things really - in order of priority:

-   Specify a specific client cert per proxy-pass or other Location and so 
on.

-   Be able to have a bunch of client certs respond/get picked right 
(narrowest) when the server gives a list of acceptable authorities.

-   Be able to lock a specific client cert down to a cert in the chain of 
the servers issuer; or to the DN/etc of the server.

Though the latter/last is easily worked around by having multiple vhosts 
wrapped around.

Dw

smime.p7s
Description: S/MIME cryptographic signature


Re: Infinite data stream from a non-HTTPD external process via HTTPD

2011-09-20 Thread Dirk-Willem van Gulik

On 20 Sep 2011, at 10:41, Henrik Strand wrote:

 On Tue, 2011-09-20 at 11:32 +0200, Ben Noordhuis wrote:

 On Tue, Sep 20, 2011 at 11:13, Henrik Strand henrik.str...@axis.com wrote:

 I would like to send an infinite data stream from a non-HTTPD external
 process via HTTPD to the client connection. Both HTTP and HTTPS must be
 supported.
 
 What kind of external process are we talking here? Something that
 prints to stdout, listens on a UNIX/TCP socket, something else?

 A process, running on the same system as httpd, that will generate a
 video stream (i.e., a stream of images and audio).

I found it very effective to use a socket pass (see Stevens[1] - or for the 
simplest case; see

http://httpd.apache.org/docs/2.3/mod/mod_proxy_fdpass.html

Or google on (WSADuplicateSocket, ioctl(I_SENDFD), sendmsg() with access 
rights, etc).  Unfortunately they are not quite as portable as one would want 
them to be.  Some old postings [2] may help; check the apr socket code, as you 
need to do a bit of work on the receiving side.
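
For the sendmsg() flavour the core of the sending side is roughly the below 
(a sketch; 'chan' is an AF_UNIX socket to the receiving process, which does 
the matching recvmsg()):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int pass_fd(int chan, int fd)
    {
        struct msghdr msg;
        struct iovec iov;
        char byte = 0, ctrl[CMSG_SPACE(sizeof(int))];
        struct cmsghdr *cm;

        memset(&msg, 0, sizeof(msg));
        iov.iov_base = &byte;            /* need at least one payload byte */
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SCM_RIGHTS;      /* the fd travels as ancillary data */
        cm->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cm), &fd, sizeof(int));

        return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
    }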

Let me know if you get totally stuck - I may have some old cruft - though that 
was more to then pass on to NPath - a proprietary load balancing technique to 
move the TCP connection to another server altogether while having the return 
path bypassing the LB hardware itself. 

Dw.

1: http://www.amazon.com/dp/0131411551/
2: http://archives.neohapsis.com/archives/postfix/2000-09/1476.html
http://lists.canonical.org/pipermail/kragen-hacks/2002-January/000292.html



Re: Next update

2011-09-01 Thread Dirk-Willem van Gulik

On 1 Sep 2011, at 12:06, Ben Laurie wrote:

 On Wed, Aug 31, 2011 at 9:03 PM, Dirk-WIllem van Gulik
 di...@webweaving.org wrote:
 Suggestion for
 
http://people.apache.org/~dirkx/CVE-2011-3192.txt
 
 You probably mean deprecated not desecrated, amusing though that is.
 
Darn Functional MRI - them spell checkers are getting scarily good at reading 
my mind !

Thanks Ben,

Dw. 

Re: Another regression regarding byteranges

2011-09-01 Thread Dirk-Willem van Gulik

On 1 Sep 2011, at 13:33, Jim Jagielski wrote:

 
 On Sep 1, 2011, at 6:31 AM, Plüm, Rüdiger, VF-Group wrote:
 I already fixed that in trunk.
 I think this regression justifies another release for 2.2.x. But IMHO we 
 should wait at least until
 mid next week to see if other regressions come thru and hit them all with a 
 2.2.21.
 
 
 +1
 

Ok - so this makes it sound like we really should get the advisory out. Shall I 
update it with some caveats and 'stay tuned' - but still make it FINAL ?

Or should we make this an update - and not declare final victory ?

Dw.

Re: Next update

2011-08-31 Thread Dirk-WIllem van Gulik

On 31 Aug 2011, at 18:20, William A. Rowe Jr. wrote:

 Note some additional improvements for a 'final' update 3 advisory…

Ack!  Draft coming in half an hour or so,

Dw.



Re: Next update

2011-08-31 Thread Dirk-WIllem van Gulik

On 26 Aug 2011, at 18:05, William A. Rowe Jr. wrote:

 On 8/26/2011 11:41 AM, Eric Covener wrote:
 Should we bump the 5's in the draft advisory and/or code to a more
 liberal #?  At the very least for the 2.0 rewrite solution that will
 return forbidden instead of full content?
 
 Can we please avoid sending more advisories without a canonical link
 to a wiki recommendations/discussion/observations page?  As admins have
 already adopted the 5's, they are in the best position to tell us what
 broke, based on feedback from their audience.
 
 I'll start one shortly.

URL ?

Dw.



Re: Next update

2011-08-31 Thread Dirk-WIllem van Gulik
Suggestion for

http://people.apache.org/~dirkx/CVE-2011-3192.txt

to be sent to announce and the usual security places.

-  Comments on weaken/strengthen 1.3 text

Happy to completely recant that it was vulnerable. Or happy to keep a 
bit of a warning in there.

-  Lots of small tweaks.

-  Do we leave the 200/206 chunked/full range caveats in - or is that no 
longer the case ?

Thanks,

Dw.

Re: Next update

2011-08-31 Thread Dirk-WIllem van Gulik

On 31 Aug 2011, at 21:03, Dirk-WIllem van Gulik wrote:

 Suggestion for
 
   http://people.apache.org/~dirkx/CVE-2011-3192.txt
 
 to be sent to announce and the usual security places.
 
 -Comments on weaken/strengthen 1.3 text
 
   Happy to completely recant that it was vulnerable. Or happy to keep a 
 bit of a warning in there.
 
 -Lots of small tweaks.
 
 -Do we leave the 200/206 chunked/full range caveats in - or is that no 
 longer the case ?
 
 Thanks,

Ah - before I forget - also fine to not do it this heavy handed - but to send 
Jim his message to users/devs@ and to these security places as well.

But am slightly biased towards an advisory of this size - as it helps admins 
in large organizations negotiate priorities with their ops teams, bosses and 
others.

Dw.

vote for FINAL announce

2011-08-31 Thread Dirk-WIllem van Gulik

On 31 Aug 2011, at 21:07, Dirk-WIllem van Gulik wrote:

  http://people.apache.org/~dirkx/CVE-2011-3192.txt

Off to bed now. If there is consensus we are 'done' then a vote on either of 
these options would be of use

1   as above

2   as above but with complete reversal of the 1.3 situation - state it is 
totally not affected (i.e. when configured right).

3   as above - but with a large caveat that we're not yet happy with this - 
follow this wiki page and keep an eye out for the release notes of the next 
releases.

Aiming at 1000Z tomorrow - as that gives folks in big companies a lot of 
planning room - and is not yet hitting the 
friday-thou-shall-not-break-things-before-the-weekend.

Thanks,

Dw.



Re: Advisory: Range header DoS vulnerability Apache HTTPD 1.3/2.x (CVE-2011-3192)

2011-08-31 Thread Dirk-WIllem van Gulik
Folks,

See below - for the 1.3 discussion - that suggests we should take it a notch 
down:

On 31 Aug 2011, at 22:35, Munechika Sumikawa wrote:

 We're currently discussing this - and will probably adjust the
 announcement a bit. It is vulnerable in that it can suddenly take a
 lot more CPU, memory and resources when 'attacked'. And the response
 is worse than pure linear. But unlike 2.0 and 2.2 it does not
 explode exponentially. So at this point I am expecting us to
 conclude that 1.3 is 'as affected' as most other servers
 implementing this protocol; not due to a fault in the code - but
 more to a fault in the protocol design.
 
 Does that make sense ?
 
 Let me confirm the code.  Apache 1.3 allocates only several bytes per
 each byte-range to record first-pos and last-pos.  And the memory is
 released immediately after the HTTP session is disconnected.  Thus,
 it's impossible for a cracker to DoS a 1.3.x server with the paranoia
 range header.  Am I correct?
 
 If so, IMO Apache 1.3's behavior should be normal case.  More
 complicated pattern based on the designed protocol eat up more
 resources than simpler pattern.  That always happens in any protocols.
 (e.g. IP fragmentation)
 
 I think it's stll in scope of linear even though it's not pure
 linear.

Which makes good sense. And looking at the default 1.3 configs on the standard 
platforms of that time - it is indeed not really apache which is at fault.

Dw.



Wrapup -- Was: 2.2 approach for byterange?

2011-08-29 Thread Dirk-WIllem van Gulik
Folks,

How do we wrap this up in what remains of the (US) day ? 

I can have ready a final draft or a 'more mitigation' draft  - of the stay 
tuned type. 

Since Advisory update-2 I have gotten about 5 emails with helpful (but small) 
improvements for what we have; 3 with some detailed feedback - and a lot of 
reports that the fixes are good enough for most. But nothing major.

So the only reason to send out an update rather than a final would be that we 
expect the final one to be 24+ hours away -- OR if we expect the final one to be 
hard for admins to roll (which we have no indication of).

Correct ?

What are people's thoughts ? Fair to assume we can have in the next 6 to 12 
hours:

-   Patch for 2.2 well tested

-   Backport for 2.0 - ideally well tested too 

Or do we need to roll a complete version (guess not) ? Any verbiage to include 
on this fix not being the final one (IMHO no - IETF pointer is good enough).

Who is in the right timezone to carry the ball the last few meters over the 
finish line?

Thanks,

Dw

Re: svn commit: r1162560 - /httpd/httpd/trunk/modules/http/byterange_filter.c

2011-08-28 Thread Dirk-Willem van Gulik

On 28 Aug 2011, at 18:12, rpl...@apache.org wrote:

 URL: http://svn.apache.org/viewvc?rev=1162560view=rev
 Log:
 * Silence compiler warning

 +apr_off_t pofft = 0;


This may have been more than that - it fixes my testcase on FreeBSD with all 
optimizing set to max (OSX and Ubuntu were already fine).

Dw.



Next update

2011-08-26 Thread Dirk-Willem van Gulik
Folks - as we're not quite there yet - I want to send out an updated advisory 
at 11:00 UTC. We have enough new information and extra mitigations. Will post 
the draft(s) to security@ this time.

Secondly - I got below updates to the regex-es; to optimise the pcre 
expressions and remove the exhaustive match:

from
   SetEnvIf Range (,.*?){5,} bad-range=1
to
   SetEnvIf Range (?:,.*?){5,} bad-range=1

from:
   RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
to:
   RewriteCond %{HTTP:range} !(?:^bytes=[^,]+(?:,[^,]+){0,4}$|^$)

Please pipe up if you see issues with those,

Thanks

Dw.

PS: Committers - if you are not subscribed to security@ - now is a good time :)



Re: Next update

2011-08-26 Thread Dirk-Willem van Gulik
On 26 Aug 2011, at 10:47, Dirk-Willem van Gulik wrote:

 Folks - as we're not quite there yet - I want to send out an updated 
 advisory at 11:00 UTC. We have enough new information and extra mitigations. 
 Will post the draft(s) to security@ this time.

Apologies for the rush and blindsiding some people. But the friendly folks out 
there had finally cottoned on to my accidental comment in the module code that 
request-range was equally affected. 

So I thought it appropriate to release the advisory as is; with all the regex 
improvements and other fixes.

Thanks.

Dw.


Apache HTTPD Security ADVISORY
==
  UPDATE 2

Title:   Range header DoS vulnerability Apache HTTPD 1.3/2.x

CVE: CVE-2011-3192
Last Change: 20110826 1030Z
Date:20110824 1600Z
Product: Apache HTTPD Web Server
Versions:Apache 1.3 all versions, Apache 2 all versions

Changes since last update
=
In addition to the 'Range' header - the 'Request-Range' header is equally
affected. Furthermore various vendor updates, improved regexes (speed and
accommodating a different and new attack pattern).

Description:


A denial of service vulnerability has been found in the way the multiple 
overlapping ranges are handled by the Apache HTTPD server:

   http://seclists.org/fulldisclosure/2011/Aug/175 

An attack tool is circulating in the wild. Active use of this tool has 
been observed.

The attack can be done remotely and with a modest number of requests can 
cause very significant memory and CPU usage on the server. 

The default Apache HTTPD installation is vulnerable.

There is currently no patch/new version of Apache HTTPD which fixes this 
vulnerability. This advisory will be updated when a long term fix 
is available. 

A full fix is expected in the next 24 hours. 

Background and the 2007 report
==

There are two aspects to this vulnerability. One is new, is Apache specific; 
and resolved with this server side fix. The other issue is fundamentally a 
protocol design issue dating back to 2007:

http://seclists.org/bugtraq/2007/Jan/83 

The contemporary interpretation of the HTTP protocol (currently) requires a 
server to return multiple (overlapping) ranges; in the order requested. This 
means that one can request a very large range (e.g. from byte 0- to the end) 
100's of times in a single request. 
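
(Illustratively - the requests at issue carry a header of the form below; a 
synthetic example:

    GET / HTTP/1.1
    Host: example.com
    Range: bytes=0-,0-,0-,0-,0-   [ ...repeated hundreds of times... ]

each of which the server dutifully turns into its own copy of the response 
data.)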

Being able to do so is an issue for (probably all) webservers and currently 
subject of an IETF discussion to change the protocol:

http://trac.tools.ietf.org/wg/httpbis/trac/ticket/311

This advisory details a problem with how Apache httpd and its so called 
internal 'bucket brigades' deal with serving such valid requests. The
problem is that currently such requests internally explode into 100's of 
large fetches, all of which are kept in memory in an inefficient way. This
is being addressed in two ways. By making things more efficient. And by 
weeding out or simplifying requests deemed too unwieldy.

Mitigation:
===

There are several immediate options to mitigate this issue until a full fix 
is available. Below examples handle both the 'Range' and the legacy
'Request-Range' with various levels of care. 

Note that 'Request-Range' is a legacy name dating back to Netscape Navigator 
2-3 and MSIE 3. Depending on your user community - it is likely that you
can use option '3' safely for this older 'Request-Range'.

1) Use SetEnvIf or mod_rewrite to detect a large number of ranges and then
 either ignore the Range: header or reject the request.

 Option 1: (Apache 2.2)

# Drop the Range header when more than 5 ranges.
# CVE-2011-3192
SetEnvIf Range (?:,.*?){5,5} bad-range=1
RequestHeader unset Range env=bad-range

# We always drop Request-Range; as this is a legacy
# dating back to MSIE3 and Netscape 2 and 3.
RequestHeader unset Request-Range

# optional logging.
CustomLog logs/range-CVE-2011-3192.log common env=bad-range
CustomLog logs/range-CVE-2011-3192.log common env=bad-req-range

 The above may not work for all configurations. In particular, mod_cache and
 (language) modules may act before the 'unset' is executed during the
 'fixup' phase.

 Option 2: (Pre 2.2 and 1.3)

# Reject request when more than 5 ranges in the Range: header.
# CVE-2011-3192
#
RewriteEngine on
RewriteCond %{HTTP:range} !(bytes=[^,]+(,[^,]+){0,4}$|^$)
# RewriteCond %{HTTP:request-range} !(bytes=[^,]+(?:,[^,]+){0,4}$|^$)
RewriteRule .* - [F]

# We always drop Request-Range; as this is a legacy
# dating back to MSIE3 and Netscape 2 and 3.
RequestHeader unset Request-Range

 The number 5 is arbitrary. Several 10's should not be an issue and may be
 required for sites which for example serve PDFs to very high end eReaders
 or use things such as complex http-based video streaming.

Advisory improvement

2011-08-26 Thread Dirk-Willem van Gulik
From the Full Disclosure list. Does anyone have time to confirm this 
improvement?

On 26 Aug 2011, at 12:09, Carlos Alberto Lopez Perez wrote:
 RewriteEngine on
 RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$) [NC,OR]
 RewriteCond %{HTTP:request-range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$) [NC]
 RewriteRule .* - [F]
 
 Because if you don't specify the [OR] apache will combine the rules
 making an AND (and you don't want this!).
 
 Also use NC=(nocase) to prevent the attacker upper casing bytes=
 (don't know if it will work.. but just to prevent)

Pretty Please !

Thanks,

Dw.




Re: Next update

2011-08-26 Thread Dirk-Willem van Gulik

On 26 aug. 2011, at 18:35, Guenter Knauf wrote:

 Hi Dirk,
 On 26.08.2011 12:44, Dirk-Willem van Gulik wrote:
 4) Deploy a Range header count module as a temporary stopgap measure:
 
http://people.apache.org/~dirkx/mod_rangecnt.c
 you have a copy/paste error for the 1.3.x part:
 
 --- mod_rangecnt.c.orig Fri Aug 26 18:30:08 2011
 +++ mod_rangecnt.c  Fri Aug 26 18:32:21 2011
 @@ -104,7 +104,7 @@
 }
 
 #ifndef AP_MODULE_DECLARE_DATA
 -module MODULE_VAR_EXPORT usertrack_module = {
 +module MODULE_VAR_EXPORT rangecnt_module = {

Fixed. And thanks to a weird testing quirk - this actually worked 
(mod_usertrack probably did not :) ). 

Dw.




Re: Final draft / CVE-2011-3192

2011-08-25 Thread Dirk-Willem van Gulik
Thanks. Added to the interim draft update.

Dw.

On 25 Aug 2011, at 06:36, Steffen wrote:

 For Mitigation of Apache Range Header DoS Attack with mod_security, see
 also:
 
 http://blog.spiderlabs.com/2011/08/mitigation-of-apache-range-header-dos-attack.html
 
 
 - Original Message -
 From: Dirk-Willem van Gulik di...@webweaving.org
 Newsgroups: gmane.comp.apache.devel
 To: secur...@httpd.apache.org; dev@httpd.apache.org
 Sent: Wednesday, August 24, 2011 5:34 PM
 Subject: Final draft / CVE-2011-3192
 
 
 Thanks for all the help. All fixes included. Below will go out to announce
 at the top of the hour - unless I see a veto.
 
 Dw.
 
 
 
 
 Title:CVE-2011-3192: Range header DoS vulnerability Apache HTTPD 1.3/2.x
   Apache HTTPD Security ADVISORY
 
 Date: 20110824 1600Z
 Product:  Apache HTTPD Web Server
 Versions: Apache 1.3 all versions, Apache 2 all versions
 
 Description:
 
 
 A denial of service vulnerability has been found in the way multiple
 overlapping ranges are handled by the Apache HTTPD server:
 
  http://seclists.org/fulldisclosure/2011/Aug/175
 
 An attack tool is circulating in the wild. Active use of this tool has
 been observed.
 
 The attack can be done remotely and with a modest number of requests can
 cause very significant memory and CPU usage on the server.
 
 The default Apache HTTPD installation is vulnerable.
 
 There is currently no patch/new version of Apache HTTPD which fixes this
 vulnerability. This advisory will be updated when a long term fix
 is available.
 
 A full fix is expected in the next 48 hours.
 
 Mitigation:
 
 
 However there are several immediate options to mitigate this issue until
 that time.
 
 1) Use SetEnvIf or mod_rewrite to detect a large number of ranges and then
either ignore the Range: header or reject the request.
 
Option 1: (Apache 2.0 and 2.2)
 
   # drop Range header when more than 5 ranges.
   # CVE-2011-3192
   SetEnvIf Range (,.*?){5,} bad-range=1
   RequestHeader unset Range env=bad-range
 
   # optional logging.
   CustomLog logs/range-CVE-2011-3192.log common env=bad-range
 
Option 2: (Also for Apache 1.3)
 
   # Reject request when more than 5 ranges in the Range: header.
   # CVE-2011-3192
   #
   RewriteEngine on
   RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
   RewriteRule .* - [F]
 
The number 5 is arbitrary. Several 10's should not be an issue and may be
required for sites which for example serve PDFs to very high end eReaders
 or use things such as complex http-based video streaming.
 
 2) Limit the size of the request field to a few hundred bytes. Note that
 while
this keeps the offending Range header short - it may break other headers;
such as sizeable cookies or security fields.
 
   LimitRequestFieldSize 200
 
Note that as the attack evolves in the field you are likely to have
to further limit this and/or impose other LimitRequestFields limits.
 
See: http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize
 
 3) Use mod_headers to completely dis-allow the use of Range headers:
 
   RequestHeader unset Range
 
Note that this may break certain clients - such as those used for
e-Readers and progressive/http-streaming video.
 
 4) Deploy a Range header count module as a temporary stopgap measure:
 
  http://people.apache.org/~dirkx/mod_rangecnt.c
 
Precompiled binaries for some platforms are available at:
 
 http://people.apache.org/~dirkx/BINARIES.txt
 
 5) Apply any of the current patches under discussion - such as:
 

 http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e
 
 Actions:
 
 
 Apache HTTPD users who are concerned about a DoS attack against their server
 should consider implementing any of the above mitigations immediately.
 
 When using a third party attack tool to verify vulnerability - know that most
 of the versions in the wild currently check for the presence of mod_deflate;
 and will (mis)report that your server is not vulnerable if this module is not
 present. This vulnerability is not dependent on presence or absence of that
 module.

Re: Fixing Ranges

2011-08-25 Thread Dirk-Willem van Gulik

On 24 Aug 2011, at 22:16, Stefan Fritsch wrote:

 I have another idea: Instead of using apr_brigade_partition write a
 new function ap_brigade_copy_part that leaves the original brigade
 untouched. It would copy the necessary buckets to a new brigade and
 then split the first and last of those copied buckets as necessary and
 destroy the excess buckets. AFAICS, this would reduce the quadratic
 growth into linear. Do you think that would solve our problems?

This looks really rather neat - it seems to tackle the core issue - that of 
memory; and it also makes it better for legitimate requests with many ranges in 
them (while I never noticed before - going back and looking at them - those PDF 
requests do take a lot of memory).
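
For reference - a rough sketch of the idea, with a hypothetical name and
signature based only on the description above (the real patch also has to
handle buckets of unknown length, error paths, etc.): copy the buckets that
overlap [start, end] into a fresh brigade and trim the copies, leaving the
original brigade untouched.

    static apr_status_t copy_brigade_range(apr_bucket_brigade *out,
                                           apr_bucket_brigade *in,
                                           apr_off_t start, apr_off_t end)
    {
        apr_bucket *e;
        apr_off_t off = 0;

        for (e = APR_BRIGADE_FIRST(in);
             e != APR_BRIGADE_SENTINEL(in) && off <= end;
             e = APR_BUCKET_NEXT(e)) {
            apr_off_t e_end = off + e->length;    /* one past this bucket */

            if (e_end > start) {                  /* bucket overlaps range */
                apr_bucket *copy, *keep;
                apr_off_t copy_start = (off > start) ? off : start;
                apr_status_t rv = apr_bucket_copy(e, &copy);

                if (rv != APR_SUCCESS)
                    return rv;
                APR_BRIGADE_INSERT_TAIL(out, copy);
                if (off < start) {                /* trim leading bytes */
                    apr_bucket_split(copy, (apr_size_t)(start - off));
                    keep = APR_BUCKET_NEXT(copy);
                    apr_bucket_delete(copy);
                    copy = keep;
                }
                if (e_end > end + 1) {            /* trim trailing bytes */
                    apr_bucket_split(copy, (apr_size_t)(end + 1 - copy_start));
                    apr_bucket_delete(APR_BUCKET_NEXT(copy));
                }
            }
            off = e_end;
        }
        return APR_SUCCESS;
    }

That way apr_brigade_partition's in-place splitting - and the resulting
one-byte bucket growth for every subsequent range - is avoided entirely.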

Dw.



Next update on CVE-2011-3192

2011-08-25 Thread Dirk-Willem van Gulik
I am keeping a draft at

http://people.apache.org/~dirkx/CVE-2011-3192.txt

Changes since last are:

-   version ranges more specific
-   vendor information added
-   backgrounder on relation to 2007 issues (see below to ensure I got this 
right).

I suggest we sent this out late Z time today (i.e. end of working day US) _if_ 
1) it is likely that we do not have a firm timeline for the full fix and 2) we 
have a bit more to add. Otherwise we skip to a final update with the fixing 
instructions for 2.0 and 2.2

Feedback welcome,

Thanks,

Dw.

Re: Fixing Ranges

2011-08-25 Thread Dirk-Willem van Gulik

On 25 Aug 2011, at 12:40, Jim Jagielski wrote:

 Tested and this does appear to both address the DoS as well as
 reduce memory usage for excessive range requests…
 
 +1 for adding this no matter what.

Yup - same here. Makes PDF serving a heck of a lot better too.

Dw.

 
 On Aug 24, 2011, at 7:38 PM, Stefan Fritsch wrote:
 
  On Thursday 25 August 2011, Greg Ames wrote:
  On Wed, Aug 24, 2011 at 5:16 PM, Stefan Fritsch s...@sfritsch.de
  wrote:
  I have another idea: Instead of using apr_brigade_partition write
  a new function ap_brigade_copy_part that leaves the original
  brigade untouched. It would copy the necessary buckets to a new
  brigade and then split the first and last of those copied
  buckets as necessary and destroy the excess buckets. AFAICS,
  this would reduce the quadratic growth into linear. Do you think
  that would solve our problems?
 
  How does apr_brigade_partition contribute to quadratic growth?
  Does the original brigade end up with a lot of one byte buckets?
 
  Yes, it splits the buckets in the original brigade, creating up to two
  new buckets for every range. These split one-byte buckets are then
  copied again for each of the subsequent ranges.
 
  The attached PoC patch does not change the original brigade and seems
  to fix the DoS for me. It needs some more work and some review for
  integer overflows, though. (apr_brigade_partition does some
  interesting things there).
  range-linear.diff
 
 



Re: Fixing Ranges

2011-08-25 Thread Dirk-Willem van Gulik

On 25 Aug 2011, at 15:48, Plüm, Rüdiger, VF-Group wrote:

 +1 for 2.3, -0 for 2.2. I guess for 2.2 we should only detect misuse (the 
 definition of misuse needs to be configurable) and reply with a 200 if misuse 
 is detected.

Ok - gave the patch a good test - and actually - I am not sure we need even 
that (much) abuse detection. 

I find it darn hard to kill the server now; and would almost say that we can 
leave it as is. And PERHAPS have an optional cap on the number of ranges (set 
at unlimited by default - leaving it to LimitRequestFieldSize to cap it off at 
effectively some 1200-1500).

Thanks

Dw

Re: Fixing Ranges

2011-08-25 Thread Dirk-Willem van Gulik

On 25 Aug 2011, at 17:45, Greg Ames wrote:

 On Thu, Aug 25, 2011 at 10:25 AM, Jim Jagielski j...@jagunet.com wrote:
 Now that the memory utilz is being fixed, we need to determine
 what sort of usage we want to allow… I'm guessing that people
 are ok with http://trac.tools.ietf.org/wg/httpbis/trac/ticket/311
 as the guiding spec?
 
 I'm no longer convinced that we *need* to merge/optimize the Range header.  
 Stefan's fix to the brigade plumbing has converted O(N^2) memory + CPU use 
 into something scalable.  

Same feeling here. BUT that is based on the FreeBSD platform I care about. And 
we have noticed earlier that Ubuntu (with its default commit pattern in the 
vm) behaves very differently. 

So is it fair to assume this reasoning holds across the board -- or should we 
get some very explicit +1's on this approach from key linux, windows and what 
not users ?

Dw

Re: Truly minor inconsistency in mod_rangecnt.c

2011-08-25 Thread Dirk-Willem van Gulik

On 25 Aug 2011, at 15:53, Tom Evans wrote:

 I wasn't sure whether to mail this in, it is inconsequential; the
 module is supposed to count the number of ranges, but it actually
 counts the number of commas between ranges, leading to an off-by-one.
 I.e., a request with 6 ranges would not be rejected, whereas the code
 has #define MAXRANGEHEADERS (5).

Yup - spot on - that is indeed a bug. And actually - with what we know
now - that number should probably be a 100 or so.

 Its truly minor, but made my test tool to determine whether a server
 is vulnerable to give some false positives, as it was sending 5 ranges
 and expecting a 417.

But let's get it fixed :)
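
For the record - a minimal sketch of the corrected count (hypothetical code
mirroring what the module does, not a quote of it): a value with N commas
carries N+1 ranges, so count ranges rather than commas.

    #define MAXRANGEHEADERS 5

    /* Number of ranges in a Range header value: commas + 1.
     * Counting just the commas is the off-by-one Tom found. */
    static int count_ranges(const char *value)
    {
        int n = 1;
        for (; *value; value++) {
            if (*value == ',')
                n++;
        }
        return n;
    }

    /* ... and in the hook: reject with 417 when
     * count_ranges(range) > MAXRANGEHEADERS. */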

Thanks!

Dw.


CVE-2011-3192 - NeXT update ?

2011-08-25 Thread Dirk-WIllem van Gulik
Folks,

What is wisdom? We have an updated version at 
people.apache.org/~dirkx/CVE-2011-3192.txt. 

I'd say: let's send this off today if we expect the full patch to take another 
24+ hours, as there is a need for the improved mitigations. And otherwise skip 
it and go to final ASAP?

What is your take ?

Thanks,

Dw.
-- 
Dirk-Willem van Gulik.

CVE (Was: DoS with mod_deflate range requests)

2011-08-24 Thread Dirk-Willem van Gulik
Folks,

Have we done (or who is doing) a CVE on this ? So we get immediate 'fixes' out 
like a tiny patch to count the commas, a caveated LimitRequestFieldSize 100 or 
a clever Regex on %{HTTP_Range}.

Or am I totally asleep and missed the CVE (as my google-fu only nets me 
CVE-2005-2728 right now - which is from 2005!).

Dw.

Mitigation Range header (Was: DoS with mod_deflate range requests)

2011-08-24 Thread Dirk-Willem van Gulik
Folks,

This issue is now active in the wild. So some unified/simple comms is needed. 

What is the wisdom on mitigation advise/briefing until a proper fix it out - in 
order of ease:

-  Where possible - disable mod_deflate

= are we sure this covers all cases - or is this a good stopgap ?

-  Where possible - set LimitRequestFieldSize to a small value

=  is a suggestion of 128 fine ?

-  Where this is not possible (e.g. long cookies, auth headers of serious 
size) consider using mod_rewrite to not accept more than a few commas

=  anyone have a config snippet for this ?

-  Perhaps a stop gap module

http://people.apache.org/~dirkx/mod_rangecnt.c (is this kosher?? - a rough 
sketch follows below)

-  Apply patch XXX from the mailing list

Any thoughts ? Followed by: upgrade as soon as a release is made.
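
For completeness - such a stopgap module boils down to something like the
following (a hypothetical, simplified sketch for httpd 2.x; see the actual
mod_rangecnt.c linked above for the real thing, including the 1.3 variant):

    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"

    #define MAXRANGEHEADERS 5

    /* Reject requests whose Range header carries too many ranges;
     * a value with N commas holds N+1 ranges. */
    static int rangecnt_post_read_request(request_rec *r)
    {
        const char *range = apr_table_get(r->headers_in, "Range");
        int n = 1;

        if (!range)
            return DECLINED;
        for (; *range; range++) {
            if (*range == ',')
                n++;
        }
        return (n > MAXRANGEHEADERS) ? HTTP_EXPECTATION_FAILED : DECLINED;
    }

    static void rangecnt_register_hooks(apr_pool_t *p)
    {
        ap_hook_post_read_request(rangecnt_post_read_request,
                                  NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA rangecnt_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        rangecnt_register_hooks
    };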

Thanks,

Dw

CVE-2011-3192 (Was: CVE (Was: DoS with mod_deflate range requests))

2011-08-24 Thread Dirk-Willem van Gulik
The new Range: header has been given the CVE of

CVE-2011-3192

Please use that in subjects, commits and what not.

Thanks,

Dw.

On 24 Aug 2011, at 09:28, Dirk-Willem van Gulik wrote:

 Folks,
 
 Have we done (or who is doing a CVE) on this ? So we get immediate 'fixes' 
 out like a tiny patch to count the comma's, a caveated LimitRequestFieldSize 
 100 or a clever Regex on %{HTTP_Range}.
 
 Or am I totally asleep and missed the CVE (as my google foo only nets me 
 CVE-2005-2728 right now - which is from 2005!).
 
 Dw.
 



CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (DRAFT)dev

2011-08-24 Thread Dirk-Willem van Gulik
Comments please. Esp. on the quality and realism of the mitigations.


Title:  CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and 
Apache 2
Date:   20110824 1600Z
# Last Updated:  20110824 1600Z
Product:   Apache Web Server
Versions:  Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by Apache 
(http://seclists.org/fulldisclosure/2011/Aug/175). It most commonly manifests 
itself when static content is made available with compression on the fly 
through mod_deflate. 

This is a very common (the default right!?) configuration. 

The attack can be done remotely and with a modest number of requests leads to 
very significant memory and CPU usage. 

Active use of this tool has been observed in the wild.

There is currently no patch/new version of Apache which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 
A fix is expected in the next 96 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that time:

1)  Disable compression-on-the-fly by:

1)  removing mod_deflate as a loaded module and/or by removing any 
AddOutputFilterByType/SetOutputFilter DEFLATE entries; or

2)  disabling it with BrowserMatch .* no-gzip

See:http://httpd.apache.org/docs/2.0/mod/mod_deflate.html
http://httpd.apache.org/docs/2.2/mod/mod_deflate.html

2)  Use mod_headers to dis-allow the use of Range headers:

RequestHeader unset Range 

Note that this may break certain clients - such as those used for
e-Readers and progressive/http-streaming video.

3)  Limit the size of the request field to a few hundred bytes. Note that 
while this keeps the offending Range header short - it may break other 
headers; such as sizable cookies or security fields. 

LimitRequestFieldSize 200

Note that as the attack evolves in the field you are likely to have
to further limit this and/or impose other LimitRequestFields limits.

See:
http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

4)  Deploy a Range header count module as a temporary stopgap measure:

http://people.apache.org/~dirkx/mod_rangecnt.c

5)  Apply any of the current patches under discussion - such as:


http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e

Actions:

Apache HTTPD users are advised to investigate whether they are vulnerable (e.g. 
allow Range headers and use mod_deflate) and consider implementing any of the 
above mitigations. 

Planning:


This advisory will be updated when a fix/patch or new release is available. A 
patch or new Apache release for Apache 2.0 and 2.2 is expected in the next 96 
hours. Note that, while popular, Apache 1.3 is deprecated. 





Re: Mitigation Range header (Was: DoS with mod_deflate range requests)

2011-08-24 Thread Dirk-Willem van Gulik

On 24 Aug 2011, at 12:57, Plüm, Rüdiger, VF-Group wrote:

 -   Where possible - disable mod_deflate
  
  = we sure this covers all cases - or this is a good stopgap ?
 
 As said this has *nothing* to do with mod_deflate. This was IMHO just
 a guess by the original author of the tool.

Ok - but when I try it on my servers (with the check of the tool removed) - it 
seems quite impotent unless mod_deflate is in the wire.

And it seems a bit more potent when there are other 'keep in the air' modules 
around.

So I guess mod_deflate is right now the largest 'plug' we have in the server 
which can cause this backup ?

Or is that totally wrong ? Happy to stand corrected !


 -   Where possible - set LimitRequestFieldSize to a small value
 
  -  Suggesting of 128 fine ?
 
 -   Where this is not possible (e.g. long cookies, auth 
 headers of serious size) consider using
  mod_rewrite to not accept more than a few commas
 
  =  anyone a config snipped for this ?
 
 How about the following (untested) rewrite rule. It should only allow 5
 ranges at max.
 
 RewriteCond %{HTTP:range} ^bytes=[^,]+(,[^,]+){0,4}$
 RewriteRule .* - [F]


Sounds like a plan ! This mail crossed one I just sent out - lemme update that 
too.

Dw.

Re: CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (DRAFT-2)

2011-08-24 Thread Dirk-Willem van Gulik

*   Updated with Rüdiger's comments. 

*   Do we have consensus that the deflate stuff needs to go out - i.e. that it 
is not relevant ?

*   More comments please. Esp. on the quality and realism of the 
mitigations.

Thanks,

Title:  CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and 
Apache 2
Date:   20110824 1600Z
# Last Updated:  20110824 1600Z
Product:   Apache Web Server
Versions:  Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by Apache 
(http://seclists.org/fulldisclosure/2011/Aug/175). It most commonly manifests 
itself when static content is made available with compression on the fly 
through mod_deflate - but other modules which buffer and/or generate content 
in-memory are likely to be affected as well. 

This is a very common (the default right!?) configuration. 

The attack can be done remotely and with a modest number of requests leads to 
very significant memory and CPU usage. 

Active use of this tool has been observed in the wild.

There is currently no patch/new version of Apache which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 
A fix is expected in the next 96 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that time:

1)  Use mod_headers to dis-allow the use of Range headers:

RequestHeader unset Range 

Note that this may break certain clients - such as those used for
e-Readers and progressive/http-streaming video.

2)  Use mod_rewrite to limit the number of ranges:

RewriteCond %{HTTP:range} ^bytes=[^,]+(,[^,]+){0,4}$
RewriteRule .* - [F]

3)  Limit the size of the request field to a few hundred bytes. Note that 
while this
keeps the offending Range header short - it may break other headers; 
such as sizable
cookies or security fields. 

LimitRequestFieldSize 200

Note that as the attack evolves in the field you are likely to have
to further limit this and/or impose other LimitRequestFields limits.

See:
http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

4)  Deploy a Range header count module as a temporary stopgap measure:

http://people.apache.org/~dirkx/mod_rangecnt.c

5)  If your server (only) serves static content then disable 
compression-on-the-fly by:

1)  removing mod_deflate as a loaded module and/or by removing any 
AddOutputFilterByType/SetOutputFilter DEFLATE entries; or

2)  disabling it with BrowserMatch .* no-gzip

See:http://httpd.apache.org/docs/2.0/mod/mod_deflate.html
http://httpd.apache.org/docs/2.2/mod/mod_deflate.html

6)  Apply any of the current patches under discussion - such as:


http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e

Actions:

Apache HTTPD users are advised to investigate whether they are vulnerable (e.g. 
allow Range headers and use mod_deflate) and consider implementing any of the 
above mitigations. 

Planning:


This advisory will be updated when a fix/patch or new release is available. A 
patch or new Apache release for Apache 2.0 and 2.2 is expected in the next 96 
hours. Note that, while popular, Apache 1.3 is deprecated. 







Re: Mitigation Range header

2011-08-24 Thread Dirk-WIllem van Gulik

On 24 Aug 2011, at 13:22, Florian Weimer wrote:

 * Plüm, Rüdiger, VF-Group:
 
 As said this has *nothing* to do with mod_deflate. This was IMHO just
 a guess by the original author of the tool.
 
 This matches my testing, too.  I see a significant peak in RAM usage on
 a server where apachectl -M does not print anything with the string
 deflate (so I assume that mod_deflate is not enabled).  This is with
 2.2.9-10+lenny9 on Debian.
 
 If it is more difficult to check if mod_deflate is enabled, the advisory
 should tell how to check your server.  If the method I used is the
 correct one, I don't think it's reasonable to suggest disabling
 mod_deflate as a mitigation because it does not seem to make much of a
 difference.

Hmm - when I remove mod_deflate (i.e. explicitly, as it is the default in all 
our installs) and test on a / entry which is a static file which is large 
(100k)* - then I cannot get Apache on its knees on a FreeBSD machine - 
saturating the 1Gbit connection it has (Note: the attack machines *are* getting 
saturated).  The moment I put in mod_deflate, mod_ext_filter, etc - it is 
much easier to deplete enough resources to notice.

Dw.

*: as I cannot reproduce the issue with very small index.html files.




Re: Mitigation Range header

2011-08-24 Thread Dirk-WIllem van Gulik

On 24 Aug 2011, at 13:43, Florian Weimer wrote:

 * Dirk-WIllem van Gulik:
 
 Hmm - when I remove mod_deflate (i.e. explicitly as it is the default
 in all our installs) and test on a / entry which is a static file
 which is large (100k)* - then I cannot get apache on its knees on a
 freebsd machine - saturating the 1Gbit connection it has (Note: the
 attack machines *are* getting saturated).  The moment i put in
 mod_deflate, mod_external filter, etc - it is much easier to get
 deplete enough resources to notice.
 
 Oh.  Have you checked memory usage on the server?

I had not - and you are right - quite high. I also tried it on a Ubuntu machine 
- and that one dies right out of the gate - regardless of whether deflate is 
on or off.

So I guess this is somewhat OS specific - but indeed - not overly deflate 
specific. Deflate just does something.

Ok - let me rewrite advisory draft !

Dw

Re: CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (DRAFT-3)

2011-08-24 Thread Dirk-WIllem van Gulik
*   Folks - do we also need to add Request-Range ?

*   Updated with Rüdiger's, Eric's and Florian's comments.

*   The consensus that the deflate stuff needs to go out is reflected.

*   More comments please. Esp. on the quality and realism of the 
mitigations.

*   Is this the right list (and order) of the mitigations - or should 
ReWrite be first ?

*   Is mentioning a timeline fine (we've never done that before) -- or best 
avoided ?

My plan is to wait for the US to fully wake up - and then call for a few quick 
+1's to get this out - ideally before 1600 zulu.

Thanks,

Dw.







Title:  CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and 
Apache 2
Date:   20110824 1600Z
# Last Updated:  20110824 1600Z
Product:   Apache Web Server
Versions:  Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by Apache 
(http://seclists.org/fulldisclosure/2011/Aug/175).  An attack tool is 
circulating in the wild. Active use of this tool has been observed.

The attack can be done remotely and with a modest number of requests leads to 
very significant memory and CPU usage. 

The default Apache installation is vulnerable.

There is currently no patch/new version of Apache which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 
A fix is expected in the next 96 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that time:

1)  Use mod_headers to dis-allow the use of Range headers:

RequestHeader unset Range 

Note that this may break certain clients - such as those used for
e-Readers and progressive/http-streaming video.

2)  Use mod_rewrite to limit the number of ranges:

RewriteCond %{HTTP:range} !^bytes=[^,]+(,[^,]+){0,4}$
RewriteRule .* - [F]

3)  Limit the size of the request field to a few hundred bytes. Note that 
while this
keeps the offending Range header short - it may break other headers; 
such as sizable
cookies or security fields. 

LimitRequestFieldSize 200

Note that as the attack evolves in the field you are likely to have
to further limit this and/or impose other LimitRequestFields limits.

See:
http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

4)  Deploy a Range header count module as a temporary stopgap measure:

http://people.apache.org/~dirkx/mod_rangecnt.c

5)  Apply any of the current patches under discussion - such as:


http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e


Actions:
---
Apache HTTPD users are advised to investigate whether they are vulnerable (e.g. 
allow use of the Range header) and consider implementing any of the above 
mitigations immediately. 

When using a third party attack tool to verify vulnerability - know that most 
of the versions in the wild currently check for the presence of mod_deflate; 
and will (mis)report that your server is not vulnerable if this module is not 
present. This vulnerability is not dependent on presence or absence of that 
module.

Planning:
-

This advisory will be updated when a fix/patch or new release is available. A 
patch or new Apache release for Apache 2.0 and 2.2 is expected in the next 96 
hours. Note that, while popular, Apache 1.3 is deprecated. 









Re: Mitigation Range header

2011-08-24 Thread Dirk-WIllem van Gulik

On 24 Aug 2011, at 14:01, Plüm, Rüdiger, VF-Group wrote:

 Have you tried if the same happens with mod_deflate, but with one of the
 the proposed mitigations in place?

As soon as I reject/block on the range header - all is fine again.

 As said my guess is that this might be an issue with mod_deflate that
 is unrelated to the Range request issue.

Good point.

Dw

Re: CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (NEAR FINAL DRAFT-4)

2011-08-24 Thread Dirk-WIllem van Gulik

On 24 Aug 2011, at 14:07, Dirk-WIllem van Gulik wrote:

*   Folks - do we also need to add Request-Range ?

*   Updated with Rüdiger's, Eric's, Florian's and Mark's comments.

*   The consensus that the deflate stuff needs to go out is reflected.

*   More comments please. Esp. on the quality and realism of the 
mitigations.

*   Is mentioning a timeline fine (we've never done that before) -- or best 
avoided ?

My plan is to wait for the US to fully wake up - and then call around 1500Z for 
a few quick +1's to get this out - ideally before 1600 zulu.

Thanks,

Dw.







Title:  CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and 
Apache 2
Date:   20110824 1600Z
# Last Updated:  20110824 1600Z
Product:   Apache Web Server
Versions:  Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by the Apache HTTPD server 
(http://seclists.org/fulldisclosure/2011/Aug/175).  

An attack tool is circulating in the wild. Active use of this tool has been 
observed.

The attack can be done remotely and with a modest number of requests leads to 
very significant memory and CPU usage. 

The default Apache HTTPD installation is vulnerable.

There is currently no patch/new version of Apache HTTPD which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 
A full fix is expected in the next 48 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that 
time:

1)  Use mod_headers to dis-allow the use of Range headers:

RequestHeader unset Range 

Note that this may break certain clients - such as those used for
e-Readers and progressive/http-streaming video.

2)  Use mod_rewrite to limit the number of ranges:

RewriteCond %{HTTP:range} !^bytes=[^,]+(,[^,]+){0,4}$
RewriteRule .* - [F]

3)  Limit the size of the request field to a few hundred bytes. Note that 
while this
keeps the offending Range header short - it may break other headers; 
such as sizable
cookies or security fields. 

LimitRequestFieldSize 200

Note that as the attack evolves in the field you are likely to have
to further limit this and/or impose other LimitRequestFields limits.

See:
http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

4)  Deploy a Range header count module as a temporary stopgap measure:

http://people.apache.org/~dirkx/mod_rangecnt.c

5)  Apply any of the current patches under discussion - such as:


http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e


Actions:
---
Apache HTTPD users who are concerned about a DoS attack against their server 
should consider implementing any of the above mitigations immediately. 

When using a third party attack tool to verify vulnerability - know that most 
of the versions in the wild currently check for the presence of mod_deflate; 
and will (mis)report that your server is not vulnerable if this module is not 
present. This vulnerability is not dependent on presence or absence of that 
module.

Planning:
-

This advisory will be updated when a fix/patch or new release is available. A 
patch or new Apache release for Apache 2.0 and 2.2 is expected in the next 48 
hours. Note that, while popular, Apache 1.3 is deprecated. 











VOTES please -- CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (Final-5)

2011-08-24 Thread Dirk-Willem van Gulik
Folks,

Can I have a few +1's on below - or feedback on what we'd like to have changed ?

*   Would like to get this out in an hour or so ?

*   Fine with the 48-hour commitment to an update ?

Dw.



Title:CVE-2011-3192: Range header DoS vulnerability Apache HTTPD 1.3/2.x
Date: 20110824 1600Z
Product:  Apache HTTPD Web Server
Versions: Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by the Apache HTTPD server:

 http://seclists.org/fulldisclosure/2011/Aug/175  

An attack tool is circulating in the wild. Active use of this tool has been 
observed.

The attack can be done remotely and with a modest number of requests can cause 
very significant memory and CPU usage on the server. 

The default Apache HTTPD installation is vulnerable.

There is currently no patch/new version of Apache HTTPD which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 

A full fix is expected in the next 48 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that 
time:

1) Use mod_rewrite to limit the number of ranges:

   Option 1:
  RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
  RewriteRule .* - [F]

   Option 2:
  SetEnvIf Range (,.*?){5,} bad-range=1
  RequestHeader unset Range env=bad-range
  # optional logging.
  CustomLog logs/range.log "%r %{Range}i %{bad-range}e"

2) Limit the size of the request field to a few hundred bytes. Note that while 
this
   keeps the offending Range header short - it may break other headers; such as 
   sizeable cookies or security fields. 

  LimitRequestFieldSize 200

   Note that as the attack evolves in the field you are likely to have
   to further limit this and/or impose other LimitRequestFields limits.

   See:  
http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

3) Use mod_headers to dis-allow the use of Range headers:

  RequestHeader unset Range 

   Note that this may break certain clients - such as those used for
   e-Readers and progressive/http-streaming video.

4) Deploy a Range header count module as a temporary stopgap measure:

 http://people.apache.org/~dirkx/mod_rangecnt.c

5) Apply any of the current patches under discussion - such as:

   
http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e

Actions:
---
Apache HTTPD users who are concerned about a DoS attack against their server 
should consider implementing any of the above mitigations immediately. 

When using a third party attack tool to verify vulnerability - know that most 
of the versions in the wild currently check for the presence of mod_deflate; 
and will (mis)report that your server is not vulnerable if this module is not 
present. This vulnerability is not dependent on presence or absence of that 
module.

Planning:
-
This advisory will be updated when a fix/patch or new release is available. A 
patch or new Apache release for Apache 2.0 and 2.2 is expected in the next 48 
hours. Note that, while popular, Apache 1.3 is deprecated.




Re: CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (NEAR FINAL DRAFT-4)

2011-08-24 Thread Dirk-Willem van Gulik

On 24 Aug 2011, at 15:34, Guenter Knauf wrote:

 can you please apply:
 --- mod_rangecnt.c.orig   Wed Aug 24 16:25:34 2011
 +++ mod_rangecnt.cWed Aug 24 15:26:48 2011

Done.

 which I need on NetWare in order to get ap_hook_post_read_request() proto;
 
 and maybe we should also add links to mod_rangecnt binaries?
 for Netware:
 http://people.apache.org/~fuankg/httpd/apache_2.2.x-mod_rangecnt.zip
 http://people.apache.org/~fuankg/httpd/apache_2.0.x-mod_rangecnt.zip


Added pointers.

Dw.



Re: VOTES please -- CVE-2011-3192: Range header DoS vulnerability in Apache 1.3 and Apache 2 (Final-6)

2011-08-24 Thread Dirk-Willem van Gulik
Various suggested on-list and off-list fixes have been applied. Thanks all.

A few more +1's would be nice :)

Dw.





Title:CVE-2011-3192: Range header DoS vulnerability Apache HTTPD 1.3/2.x
  Apache HTTPD Security ADVISORY

Date: 20110824 1600Z
Product:  Apache HTTPD Web Server
Versions: Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by the Apache HTTPD server:

 http://seclists.org/fulldisclosure/2011/Aug/175  

An attack tool is circulating in the wild. Active use of this tool has been 
observed.

The attack can be done remotely and with a modest number of requests can cause 
very significant memory and CPU usage on the server. 

The default Apache HTTPD installation is vulnerable.

There is currently no patch/new version of Apache HTTPD which fixes this 
vulnerability. This advisory will be updated when a long term fix is available. 

A full fix is expected in the next 48 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until that 
time. 

1) Use SetEnvIf or mod_rewrite to detect a large number of ranges and then
   either ignore the Range: header or reject the request.

   Option 1:
  # drop Range header when more than 5 ranges.
  # CVE-2011-3192
  SetEnvIf Range (,.*?){5,} bad-range=1
  RequestHeader unset Range env=bad-range

  # optional logging.
  CustomLog logs/range-CVE-2011-3192.log common env=bad-range

   Option 2:
  # Reject request when more than 5 ranges in the Range: header.
  # CVE-2011-3192
  #
  RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
  RewriteRule .* - [F]

   The number 5 is arbitrary. Several 10's should not be an issue and may be
   required for sites which for example serve PDFs to very high end eReaders
   or use things such as complex http-based video streaming.

2) Limit the size of the request field to a few hundred bytes. Note that while 
this
   keeps the offending Range header short - it may break other headers; such as 
   sizeable cookies or security fields. 

  LimitRequestFieldSize 200

   Note that as the attack evolves in the field you are likely to have
   to further limit this and/or impose other LimitRequestFields limits.

   See: http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

3) Use mod_headers to completely dis-allow the use of Range headers:

  RequestHeader unset Range 

   Note that this may break certain clients - such as those used for
   e-Readers and progressive/http-streaming video.

4) Deploy a Range header count module as a temporary stopgap measure:

 http://people.apache.org/~dirkx/mod_rangecnt.c

   Precompiled binaries for some platforms are available at:

http://people.apache.org/~dirkx/BINARIES.txt

5) Apply any of the current patches under discussion - such as:

   
http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e

Actions:
---
Apache HTTPD users who are concerned about a DoS attack against their server 
should consider implementing any of the above mitigations immediately. 

When using a third party attack tool to verify vulnerability - know that most 
of the versions in the wild currently check for the presence of mod_deflate; 
and will (mis)report that your server is not vulnerable if this module is not 
present. This vulnerability is not dependent on presence or absence of that 
module.

Planning:
-
This advisory will be updated when new information, a patch or a new release is 
available. A patch or new Apache release for Apache 2.0 and 2.2 is expected in 
the next 48 hours. Note that, while popular, Apache 1.3 is deprecated.



Final draft / CVE-2011-3192

2011-08-24 Thread Dirk-Willem van Gulik
Thanks for all the help. All fixes included. Below will go out to announce at 
the top of the hour - unless I see a veto.

Dw.




Title:CVE-2011-3192: Range header DoS vulnerability Apache HTTPD 1.3/2.x
  Apache HTTPD Security ADVISORY

Date: 20110824 1600Z
Product:  Apache HTTPD Web Server
Versions: Apache 1.3 all versions, Apache 2 all versions

Description:


A denial of service vulnerability has been found in the way multiple 
overlapping ranges are handled by the Apache HTTPD server:

 http://seclists.org/fulldisclosure/2011/Aug/175 

An attack tool is circulating in the wild. Active use of this tool has 
been observed.

The attack can be done remotely and with a modest number of requests can 
cause very significant memory and CPU usage on the server. 

The default Apache HTTPD installation is vulnerable.

There is currently no patch/new version of Apache HTTPD which fixes this 
vulnerability. This advisory will be updated when a long term fix 
is available. 

A full fix is expected in the next 48 hours. 

Mitigation:


However there are several immediate options to mitigate this issue until 
that time. 

1) Use SetEnvIf or mod_rewrite to detect a large number of ranges and then
   either ignore the Range: header or reject the request.

   Option 1: (Apache 2.0 and 2.2)

  # drop Range header when more than 5 ranges.
  # CVE-2011-3192
  SetEnvIf Range (,.*?){5,} bad-range=1
  RequestHeader unset Range env=bad-range

  # optional logging.
  CustomLog logs/range-CVE-2011-3192.log common env=bad-range

   Option 2: (Also for Apache 1.3)

  # Reject request when more than 5 ranges in the Range: header.
  # CVE-2011-3192
  #
  RewriteEngine on
  RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
  RewriteRule .* - [F]

   The number 5 is arbitrary. Several 10's should not be an issue and may be
   required for sites which for example serve PDFs to very high end eReaders
   or use things such as complex http-based video streaming.

2) Limit the size of the request field to a few hundred bytes. Note that while 
   this keeps the offending Range header short - it may break other headers; 
   such as sizeable cookies or security fields. 

  LimitRequestFieldSize 200

   Note that as the attack evolves in the field you are likely to have
   to further limit this and/or impose other LimitRequestFields limits.

   See: http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize

3) Use mod_headers to completely dis-allow the use of Range headers:

  RequestHeader unset Range 

   Note that this may break certain clients - such as those used for
   e-Readers and progressive/http-streaming video.

4) Deploy a Range header count module as a temporary stopgap measure:

 http://people.apache.org/~dirkx/mod_rangecnt.c

   Precompiled binaries for some platforms are available at:

http://people.apache.org/~dirkx/BINARIES.txt

5) Apply any of the current patches under discussion - such as:

   
http://mail-archives.apache.org/mod_mbox/httpd-dev/201108.mbox/%3ccaapsnn2po-d-c4nqt_tes2rrwizr7urefhtkpwbc1b+k1dq...@mail.gmail.com%3e

Actions:


Apache HTTPD users who are concerned about a DoS attack against their server 
should consider implementing any of the above mitigations immediately. 

When using a third party attack tool to verify vulnerability - know that most 
of the versions in the wild currently check for the presence of mod_deflate; 
and will (mis)report that your server is not vulnerable if this module is not 
present. This vulnerability is not dependent on presence or absence of that 
module.


Re: DoS with mod_deflate range requests

2011-08-24 Thread Dirk-Willem van Gulik

On 24 Aug 2011, at 16:35, Tim Bannister wrote:

 On Tue, Aug 23, 2011, Roy T. Fielding wrote:
 And the spec says ...
 
   When a client requests multiple ranges in one request, the
   server SHOULD return them in the order that they appeared in the
   request.
 
 My suggestion is to reject any request with overlapping ranges or more
 than five ranges with a 416, and to send 200 for any request with 4-5
 ranges.  There is simply no need to support random access in HTTP.
 
 Deshpande  Zeng in http://dx.doi.org/10.1145/500141.500197 describe a
 method for streaming JPEG 2000 documents over HTTP, using many more than
 5 ranges in a single request.
 A client that knows about any server-side limit could make multiple
 requests each with a small number of ranges, but discovering that limit
 will add latency and take more code.

Agreed - I've seen many 10's of ranges in a single request for things like 
clever PDF pagination (or tiny TIFF quicklooks for the pages), clever http fake 
streaming and clever use of jumping to i-Frames.

I think we just need to sit this out - and accept up to RequestFieldSize limit 
bytes on that line - and then do a sort & merge of overlaps as needed. 

And then it is fine.
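
(For the record - the sort & merge itself is cheap. A minimal, standalone
sketch - hypothetical types and names, not the httpd patch - of merging
overlapping/adjacent byte ranges after sorting by start offset:)

    #include <stdlib.h>

    typedef struct { long start, end; } range_t;   /* inclusive offsets */

    static int cmp_range(const void *a, const void *b)
    {
        const range_t *ra = a, *rb = b;
        return (ra->start > rb->start) - (ra->start < rb->start);
    }

    /* Sorts ranges[] in place and merges overlapping/adjacent ones;
     * returns the new count. */
    static int merge_ranges(range_t *ranges, int n)
    {
        int i, out = 0;

        if (n == 0)
            return 0;
        qsort(ranges, n, sizeof(range_t), cmp_range);
        for (i = 1; i < n; i++) {
            if (ranges[i].start <= ranges[out].end + 1) {
                if (ranges[i].end > ranges[out].end)
                    ranges[out].end = ranges[i].end;  /* extend current run */
            }
            else {
                ranges[++out] = ranges[i];            /* start a new run */
            }
        }
        return out + 1;
    }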

Dw

Re: Final draft / CVE-2011-3192

2011-08-24 Thread Dirk-WIllem van Gulik
That is fine - we can do another update tomorrow, say noon zulu - if we expect 
that we do not have a proper patch and/or a 2.0.65 / 2.2.20 in the day 
following.

Weird though - my 2.0.61 and 64 do seem fine. So probably only the very early 
2.0 series.

Dw

On 24 Aug 2011, at 20:40, Eric Covener wrote:

 I'm seeing Apache 2.0 doesn't accept our RequestHeader syntax due to a
 defect, it misinterprets it as a value and fails startup.
 
 If we have the opportunity to amend, I think we need to suggest the
 rewrite flavor for Apache 2.0 and earlier, not just 1.3 and earlier.
 
 Also for 1.3, is our RE safe for non-PCRE? And should we reconsider
 the 5 for something more liberal?
 
   Option 1: (Apache 2.0 and 2.2)
 
  # drop Range header when more than 5 ranges.
  # CVE-2011-3192
  SetEnvIf Range (,.*?){5,} bad-range=1
  RequestHeader unset Range env=bad-range
 
  # optional logging.
  CustomLog logs/range-CVE-2011-3192.log common env=bad-range
 
   Option 2: (Also for Apache 1.3)
 
  # Reject request when more than 5 ranges in the Range: header.
  # CVE-2011-3192
  #
  RewriteEngine on
  RewriteCond %{HTTP:range} !(^bytes=[^,]+(,[^,]+){0,4}$|^$)
  RewriteRule .* - [F]
 



Re: DoS with mod_deflate range requests

2011-08-24 Thread Dirk-WIllem van Gulik

On 24 Aug 2011, at 21:39, Greg Ames wrote:

 On Wed, Aug 24, 2011 at 3:19 PM, Jim Jagielski j...@jagunet.com wrote:
 
 
  If we only merge adjacent ascending ranges, then it seems like an attacker 
  could just craft a header where the ranges jump around and dodge our fix.
 
 
 I think no matter what, we should still have some sort of
 upper limit on the number of range-sets we accept… after all,
 merge doesn't prevent jumping around ;)
 
 
 The problem I have with the upper limit on the number of range sets is the 
 use case someone posted for JPEG2000 streaming.  That has a lot of range sets 
 but is completely legit.  However, the ranges are in ascending order and 
 don't overlap.  Maybe we could count overlaps and/or non-ascending order 
 ranges and fall back to 200 + the whole object if it exceeds a limit.

Right - and the other use cases in the wild are

-   PDF readers - which fetch something at the start in request 1 and then the 
index from the end - and then quick-look images for each page and title pages. 
I've seen them do a second and 3rd request with many 10's of ranges.

-   Some of the (semi/pro) streaming video editors - which fetch a 
bunch of i-Frames and do clever skip-over stuff. Not in the high tens; but 
10-15 ranges is common.

-   Likewise for very clever MXF professional editing equipment - the 
largest case (yup - it did crash my server) tried to fetch over 2000 ranges :)

So I think we really should endeavor to allow 50 to a few 100 of them. Or at 
the very least - make it a config option to cut off below 50 or so.

Dw.

SSLRenegBufferSize

2011-05-03 Thread Dirk-Willem van Gulik
Can anyone remember why SSLRenegBufferSize is set at 128k (131072 bytes) 
currently by default ? 

And if that is just an accidental default - or if deep thought has gone into it 
? 

And what are the specific things which are likely to break if it is set 
significantly smaller* ?

Thanks,

Dw.

*: I am still looking at some very rare fringe cases (which back in July I 
expected to have to do with a hole in stacking) - where part of the (SSL) 
stack gets unhappy and somehow overloaded and then may allow security bypassing 
- which I am only seeing a few times a month - and only when there is a high 
load of users with a long RTT (probably unrelated).


Bug #30865 -- mod_disk_cache leaves many temporary files slowing file system

2011-02-27 Thread Dirk-Willem van Gulik
Ruediger,

Why is:

https://issues.apache.org/bugzilla/show_bug.cgi?id=30865

still open ? You are not sure it was fixed ? Or we just forgot about it ?

Thanks,

Dw.


Re: FOSDEM

2011-02-01 Thread Dirk-WIllem van Gulik

On 1 Feb 2011, at 22:17, Jorge Schrauwen wrote:

 Any httpd people coming to FOSDEM in Brussels, Belgium this weekend?

+1 for me.

Dw


Re: Any standard on picking a response status out of several possible?

2010-11-10 Thread Dirk-Willem van Gulik

On 10 Nov 2010, at 14:42, Dan Poirier wrote:

 If multiple response statuses would be valid for a request (e.g. 403,
 413), is there any standard on how to pick one?  I looked through RFC
 2616 but didn't see anything.  Or is it just an implementation detail?

This was the subject of a fair bit of debate at the time; from what I recall we 
tried to reduce this to a non-issue (i.e. letting the implementor make the 
call) by fairly carefully getting the 2xx/3xx and 4xx meanings right (success, 
retry with extra info, and no-go) - along with making sure that the higher 
numbers are increasingly narrow in their scope - with the assumption that 
implementors would only use those if the semantics were truly applicable and 
'actionable' by a client.

Guess that does not answer the question - but in your example 403 (forbidden) 
vs 413 (too large) - one probably wants to ask: is there anything the client 
can do (perhaps change some POST/form parameter to reduce the requested size) 
- or is it purely forbidden because of its size? If it is the first, I'd do a 
413 (so smarter clients can reduce what they get) and distinguish it from a 
403 lost case - while if it is the latter - a 403 will be more meaningful to a 
wider range of clients.
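
(Illustrative only - hypothetical names, not an httpd API - but the rule of
thumb reduces to preferring the status the client can act on:)

    /* Sketch: pick 413 when the client could plausibly shrink the request
     * and retry; fall back to the blunter 403 otherwise. */
    static int pick_status(long len, long limit, int client_can_shrink)
    {
        if (len <= limit)
            return 200;                /* OK */
        return client_can_shrink
            ? 413                      /* Request Entity Too Large */
            : 403;                     /* Forbidden */
    }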

Hope that helps,

Dw

Re: Log file rotation patch

2010-11-05 Thread Dirk-Willem van Gulik

On 4 Nov 2010, at 21:26, Brian J. France wrote:

 With the current patch, see link below, it changes the syntax to ErrorLog to 
 this:
 
  ErrorLog file-path|syslog[:facility] [rotating[:interval]]

Nice!

 There is one security issue that people may have a problem with in that the 
 directory path for the log file has to be writeable by the User that apache 
 drops privilege to.  This is because all the children will need to re-open 
 the log file and the first one will create it.

That is a pretty big eek. Wondering if we need a logging child - but then one 
would end up with the rotatelogs utility again :)

Dw.



Re: [PATCH] mod_cgi: Mitigating some header injections by dropping invalid headers?

2010-10-12 Thread Dirk-Willem van Gulik

On 12 Oct 2010, at 15:30, Malte S. Stretz wrote:

 I had a quick look at the Apache source and the solution was simple:  Just 
 drop headers which contain any character outside the range [a-zA-Z0-9-].  
 The patch against trunk is attached.

This made me think of something we had a while ago; and after checking the logs 
- big +1 from me!
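
(For context - the proposed check is tiny. A sketch, with a hypothetical
helper name rather than the exact patch: keep a header only when every
character of its name is in [a-zA-Z0-9-].)

    /* Returns 1 when the CGI response header name is clean, 0 otherwise. */
    static int header_name_is_clean(const char *name)
    {
        if (*name == '\0')
            return 0;                  /* empty names are dropped too */
        for (; *name; name++) {
            char c = *name;
            if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                  (c >= '0' && c <= '9') || c == '-'))
                return 0;
        }
        return 1;
    }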

Dw.

Re: Apache-Benchmark: Non cumulative values

2010-08-31 Thread Dirk-Willem van Gulik

On 31 Aug 2010, at 16:01, Samuel ROZE wrote:

 I'm using Apache Benchmark to get stats on my applications' performance. I
 want to create a graph, using GNUPlot and Apache Benchmark.
 The -g option creates a file that I can load in GNUPlot. It works
 very well but the request times are cumulative values:

And the third field, labelled dtime, is not what you want ?

Dw.


Re: slowloris mitigation

2010-04-15 Thread Dirk-Willem van Gulik

On 14 Apr 2010, at 22:46, Nick Kew wrote:
 
 Since then Stefan has given us mod_reqtimeout, which offers
 an alternative defence, and a more satisfactory approach.
 ..
 So what should we do with mod_noloris?
 (b) Keep it in trunk for the interested but keep it
out of released versions.

I would not mind keeping it in /trunk/ for a while longer - until this whole 
class is more or less put to rest. Found it a useful starting point to deal 
with ad-hoc/special issues.

Dw.
