Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Dirk-Willem van Gulik

Akins, Brian wrote:

On 6/22/09 10:40 PM, Weibin Yao nbubi...@gmail.com wrote:


I have an idea to mitigate the problem: put Nginx as a reverse proxy
server in front of Apache.


Or a device that effectively acts as such.

So what we did in the mid '90s when we were hit by pretty much the same
thing was a bit simpler - any client which did not complete its headers
within a few seconds (or whatever a SLIP connection over a few k baud
would need) was simply handed off by passing the file descriptor over a
socket to a special single Apache process. This one ran a simple
single-threaded async select() loop for all the laggards and would only
pass the descriptor back to the main Apache children once header reading
was complete. This was later replaced by kernel accept filters.
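
For reference, a minimal sketch of the descriptor-passing mechanism described
above, using SCM_RIGHTS over a UNIX domain socket. This is not the original
code; error handling and the helper's select() loop are omitted.

/*
 * Sketch: hand a lagging client's fd to a helper process over a UNIX
 * domain socket. The helper would add each received fd to its select()
 * set and pass it back (or close it) once the request headers arrive.
 */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

static int send_fd(int chan, int fd)
{
    struct msghdr msg;
    struct iovec iov;
    char dummy = 'F';                        /* must send at least one byte */
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;            /* pass an open descriptor */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
}

static int recv_fd(int chan)
{
    struct msghdr msg;
    struct iovec iov;
    char dummy;
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;
    int fd = -1;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(chan, &msg, 0) <= 0)
        return -1;
    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg && cmsg->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;                               /* descriptor now owned here */
}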


Thanks,

Dw.


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Graham Leggett
Dirk-Willem van Gulik wrote:

 So what we did in the mid '90s when we were hit by pretty much the same
 thing was a bit simpler - any client which did not complete its headers
 within a few seconds (or whatever a SLIP connection over a few k baud
 would need) was simply handed off by passing the file descriptor over a
 socket to a special single Apache process. This one ran a simple
 single-threaded async select() loop for all the laggards and would only
 pass the descriptor back to the main Apache children once header reading
 was complete. This was later replaced by kernel accept filters.

Are kernel accept filters widespread enough for it to be reasonably
considered a generic solution to the problem? If so, then the solution
to this problem is to just configure them correctly, and you're done.
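
For reference, on FreeBSD enabling one at the socket level looks roughly like
the sketch below; the accf_http kernel module must be loaded, and Linux has no
true accept filters (TCP_DEFER_ACCEPT is the closest analogue, and it only
waits for the first data rather than a complete request). If I remember
correctly, httpd's own AcceptFilter directive drives the same mechanism.

/*
 * Sketch: ask the FreeBSD kernel to defer accept() until a complete HTTP
 * request has arrived on the connection. Requires the accf_http kernel
 * module; the helper name is made up.
 */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

static int enable_httpready(int listen_fd)
{
    struct accept_filter_arg afa;

    memset(&afa, 0, sizeof(afa));
    strcpy(afa.af_name, "httpready");   /* "dataready" (accf_data) for SSL */
    return setsockopt(listen_fd, SOL_SOCKET, SO_ACCEPTFILTER,
                      &afa, sizeof(afa));
}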

Regards,
Graham
--




Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Plüm, Rüdiger, VF-Group
 

 -----Original Message-----
 From: Graham Leggett
 Sent: Wednesday, 24 June 2009 10:05
 To: dev@httpd.apache.org
 Subject: Re: Mitigating the Slowloris DoS attack
 
 Dirk-Willem van Gulik wrote:
 
  So what we did in the mid '90s when we were hit by pretty much the same
  thing was a bit simpler - any client which did not complete its headers
  within a few seconds (or whatever a SLIP connection over a few k baud
  would need) was simply handed off by passing the file descriptor over a
  socket to a special single Apache process. This one ran a simple
  single-threaded async select() loop for all the laggards and would only
  pass the descriptor back to the main Apache children once header reading
  was complete. This was later replaced by kernel accept filters.
 
 Are kernel accept filters widespread enough for it to be reasonably
 considered a generic solution to the problem? If so, then the solution
 to this problem is to just configure them correctly, and you're done.

The following issues remain:

1. You only have them on the BSD platforms.
2. It doesn't help with SSL.
3. These kinds of attacks can also be done in phases after the headers are
   read.

A curious question, as I am not that familiar with the accept filters:

Do they really hold off handing over the socket until they have read all the
headers? I thought they only read the first line of the request before handing
the socket over to the app.

Regards

Rüdiger


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Joe Orton
On Mon, Jun 22, 2009 at 09:48:46PM -0700, Paul Querna wrote:
 On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair a...@synflood.at wrote:
  Hello everyone,
  [...]
  The basic principle is that the timeout for new connections is adjusted
  according to the current load on the Apache instance: a load percentage is
  computed in the perform_idle_server_maintenance() routine and made available
  through the global scoreboard. Whenever the timeout is set, the current load
  percentage is taken into account. The result is that slowly sending
  connections are dropped due to a timeout, while legitimate, fast-sending
  connections are still being served. While this approach doesn't completely
  fix the issue, it mitigates the negative impact of the Slowloris attack.
 
 Mitigation is the wrong approach.
 
 We all know our architecture is wrong.

Meh.  There will always be a maximum to the number of concurrent 
connections a server can handle - be that hardware, kernel, or server 
design.  If you allow a single client to establish that number of 
connections it will deny service to other clients.

That is all that slowloris does, and you will always have to mitigate 
that kind of attack at network/router/firewall level.  It can be done 
today on Linux with a single trivial iptables rule, I'm sure the same is 
true of other kernels.
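
For example, a per-source connection cap of the kind meant here can be
expressed as follows, assuming the connlimit match is available in the
kernel (the threshold is purely illustrative):

# Reject new port-80 connections from a source that already has 20 open
iptables -A INPUT -p tcp --syn --dport 80 \
         -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset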

The only aspect of slowloris which claims to be novel is that it has 
low bandwidth footprint and no logging/detection footprint.  To the 
former, I'm not sure that the bandwidth footprint is significantly 
different from sending legitimate single-packet HTTP requests with 
single-packet responses; to the latter, it will have a very obvious 
footprint if you are monitoring the number of responses/minute your 
server is processing.

Regardless, the only thing I've ever wanted to see changed in the server 
which would somewhat mitigate this type of attack is to have coarser 
granularity on timeouts, e.g. per-request-read, rather than simply 
per-IO-operation.  (one of the few things 1.3 did better than 2.x, 
though the *way* it did it was horrible)

Regards, Joe


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Matthieu Estrade
The problem could also happen if a Content-Length is sent and not enough data
is posted, so I don't think checking for complete headers will solve the
entire problem. I'm currently playing with a dynamic timeout that considers
the time between the request line and the first header to adapt the future
timeout of the socket, but an attack between the request line and the first
incomplete header remains possible. The second possible countermeasure is to
increment a per-IP counter of waiting connections (in a connection filter:
increment before ap_get_brigade, decrement after getting it). If there are
too many connections in a waiting state from the same IP, the IP is
blacklisted. The aim is to differentiate waiting connections from working
connections. A separate thread could also check a socket list to see when the
last data arrived and kill sockets if there are too many waiting connections
from the same IP... But all of this will add locking and performance issues :(
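
A rough sketch of that per-IP counter as a connection-level input filter is
below. The names are made up, not an existing module; filter registration,
the blacklist itself, and cross-process sharing are left out (the hash and
mutex are per-process here, so a real implementation would need shared
memory to count across all children).

/*
 * Hypothetical sketch of the "waiting connection" counter: increment a
 * per-IP count before blocking in ap_get_brigade(), decrement afterwards,
 * and refuse the read when too many reads from one IP are already stalled.
 */
#include "httpd.h"
#include "util_filter.h"
#include "apr_hash.h"
#include "apr_strings.h"
#include "apr_thread_mutex.h"

#define MAX_WAITING_PER_IP 10

static apr_hash_t *ip_waiting;        /* remote IP -> count of blocked reads */
static apr_thread_mutex_t *ip_lock;   /* protects ip_waiting */

static apr_status_t slow_guard_in_filter(ap_filter_t *f, apr_bucket_brigade *bb,
                                         ap_input_mode_t mode,
                                         apr_read_type_e block,
                                         apr_off_t readbytes)
{
    const char *ip = f->c->remote_ip;  /* f->c->client_ip on httpd 2.4 */
    apr_status_t rv;
    int *count;

    apr_thread_mutex_lock(ip_lock);
    count = apr_hash_get(ip_waiting, ip, APR_HASH_KEY_STRING);
    if (count == NULL) {
        apr_pool_t *p = apr_hash_pool_get(ip_waiting);
        count = apr_pcalloc(p, sizeof(*count));
        apr_hash_set(ip_waiting, apr_pstrdup(p, ip), APR_HASH_KEY_STRING, count);
    }
    if (*count >= MAX_WAITING_PER_IP) {
        apr_thread_mutex_unlock(ip_lock);
        return APR_ECONNABORTED;       /* too many stalled reads from this IP */
    }
    (*count)++;
    apr_thread_mutex_unlock(ip_lock);

    /* This is where a slow client makes us wait. */
    rv = ap_get_brigade(f->next, bb, mode, block, readbytes);

    apr_thread_mutex_lock(ip_lock);
    (*count)--;
    apr_thread_mutex_unlock(ip_lock);
    return rv;
}

/* Registration would look like:
 *   ap_register_input_filter("SLOW_GUARD", slow_guard_in_filter,
 *                            NULL, AP_FTYPE_CONNECTION);
 */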

Matthieu

Graham Leggett wrote:
 Dirk-Willem van Gulik wrote:
 
 So what we did in the mid '90s when we were hit by pretty much the same
 thing was a bit simpler - any client which did not complete its headers
 within a few seconds (or whatever a SLIP connection over a few k baud
 would need) was simply handed off by passing the file descriptor over a
 socket to a special single Apache process. This one ran a simple
 single-threaded async select() loop for all the laggards and would only
 pass the descriptor back to the main Apache children once header reading
 was complete. This was later replaced by kernel accept filters.
 
 Are kernel accept filters widespread enough for it to be reasonably
 considered a generic solution to the problem? If so, then the solution
 to this problem is to just configure them correctly, and you're done.
 
 Regards,
 Graham
 --



Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Matthieu Estrade
I totally agree with you.

The first point is the lack of tuning in httpd.conf: this kind of attack
crashes a default httpd.conf setup, but a well-configured server is much
harder to kill, especially if you have decreased the timeout. With 5 seconds
as the timeout and good tuning, slowloris fails...

More granular timeouts and maybe adaptive timeouts are also IMHO a good way
to improve resistance to this kind of attack. 300 seconds is too much, and
maybe this default could be lowered in httpd. A POST request with a body has
far more legitimate reasons to be slow, because of the amount of data and the
time it takes to transfer; a simple GET request contains only headers and
should arrive in one go, so there is no need to wait that long here...
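
For illustration only, the kind of tuning meant here is simply the following
(the values need testing against legitimate slow clients, e.g. large uploads
over slow links):

# httpd.conf - aggressive timeouts as a partial Slowloris mitigation
Timeout 5
KeepAliveTimeout 5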

Matthieu

Joe Orton wrote:
 On Mon, Jun 22, 2009 at 09:48:46PM -0700, Paul Querna wrote:
 On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair a...@synflood.at wrote:
 Hello everyone,
 [...]
 The basic principle is that the timeout for new connections is adjusted
 according to the current load on the Apache instance: a load percentage is
 computed in the perform_idle_server_maintenance() routine and made available
 through the global scoreboard. Whenever the timeout is set, the current load
 percentage is taken into account. The result is that slowly sending
 connections are dropped due to a timeout, while legitimate, fast-sending
 connections are still being served. While this approach doesn't completely
 fix the issue, it mitigates the negative impact of the Slowloris attack.
 Mitigation is the wrong approach.

 We all know our architecture is wrong.
 
 Meh.  There will always be a maximum to the number of concurrent 
 connections a server can handle - be that hardware, kernel, or server 
 design.  If you allow a single client to establish that number of 
 connections it will deny service to other clients.
 
 That is all that slowloris does, and you will always have to mitigate 
 that kind of attack at network/router/firewall level.  It can be done 
 today on Linux with a single trivial iptables rule, I'm sure the same is 
 true of other kernels.
 
 The only aspect of slowloris which claims to be novel is that it has 
 low bandwidth footprint and no logging/detection footprint.  To the 
 former, I'm not sure that the bandwidth footprint is significantly 
 different from sending legitimate single-packet HTTP requests with 
 single-packet responses; to the latter, it will have a very obvious 
 footprint if you are monitoring the number of responses/minute your 
 server is processing.
 
 Regardless, the only thing I've ever wanted to see changed in the server 
 which would somewhat mitigate this type of attack is to have coarser 
 granularity on timeouts, e.g. per-request-read, rather than simply 
 per-IO-operation.  (one of the few things 1.3 did better than 2.x, 
 though the *way* it did it was horrible)
 
 Regards, Joe
 



Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Andreas Krennmair

* Joe Orton jor...@redhat.com [2009-06-24 11:20]:
Meh.  There will always be a maximum to the number of concurrent 
connections a server can handle - be that hardware, kernel, or server 
design.  If you allow a single client to establish that number of 
connections it will deny service to other clients.


That is all that slowloris does, and you will always have to mitigate 
that kind of attack at network/router/firewall level.  It can be done 
today on Linux with a single trivial iptables rule, I'm sure the same is 
true of other kernels.


I think you are confusing the PoC tool with the fundamental problem. You can't
fend off this kind of attack at the TCP level, at least not in cases where the
n connections that block Apache are made not by 1 host but by n hosts.


Regards,
Andreas


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Kevin J Walters

 M == Matthieu Estrade mestr...@apache.org writes:

M More granular timeouts and maybe adaptive timeouts are also IMHO a good
M way to improve resistance to this kind of attack.

The current 1.3, 2.0 and 2.2 documentation is in agreement too!

I believe the ssl module also takes its timeout value from this
setting. It would be great if that was separately configurable too to
cater for those intent on doing partial ssl handshakes.


  The TimeOut directive currently defines the amount of time Apache will wait 
for three things:

   1. The total amount of time it takes to receive a GET request.
   2. The amount of time between receipt of TCP packets on a POST or PUT 
request.
   3. The amount of time between ACKs on transmissions of TCP packets in 
responses.

  We plan on making these separately configurable at some point down the
  road. The timer used to default to 1200 before 1.2, but has been
  lowered to 300 which is still far more than necessary in most
  situations. It is not set any lower by default because there may still
  be odd places in the code where the timer is not reset when a packet
  is sent. 


regards

Kevin

-- 
Kevin J Walters  Morgan Stanley
k...@ms.com   25 Cabot Square
Tel: 020 7425 7886   Canary Wharf
Fax: 020 7677 8504   London E14 4QA


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Graham Dumpleton
2009/6/24 Kevin J Walters kevin.walt...@morganstanley.com:

 M == Matthieu Estrade mestr...@apache.org writes:

 M More granular timeouts and maybe adaptive timeouts are also IMHO a good
 M way to improve resistance to this kind of attack.

 The current 1.3, 2.0 and 2.2 documentation is in agreement too!

 I believe the ssl module also takes its timeout value from this
 setting. It would be great if that was separately configurable too to
 cater for those intent on doing partial ssl handshakes.


  The TimeOut directive currently defines the amount of time Apache will wait 
 for three things:

   1. The total amount of time it takes to receive a GET request.
   2. The amount of time between receipt of TCP packets on a POST or PUT 
 request.
   3. The amount of time between ACKs on transmissions of TCP packets in 
 responses.

  We plan on making these separately configurable at some point down the
  road. The timer used to default to 1200 before 1.2, but has been
  lowered to 300 which is still far more than necessary in most
  situations. It is not set any lower by default because there may still
  be odd places in the code where the timer is not reset when a packet
  is sent.

From what I understand, the server timeout value is also used to break
deadlocks in mod_cgi when the POST data is greater than the UNIX socket
buffer size, the CGI script doesn't read the POST data, and the script then
returns a response greater than the UNIX socket buffer size. In other words,
the CGI script blocks because the Apache server child process isn't reading
its response, while the Apache server child process is blocked waiting for
the CGI script to consume the POST data. The timeout value breaks the
deadlock. In this context, making the timeout too small may have unintended
consequences and affect how CGI scripts work, so a separate timeout for
mod_cgi would be preferable.
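
To make the deadlock concrete, a toy CGI program along the following lines
(purely illustrative) triggers it when fed a POST body larger than the socket
buffer, because neither side can make progress:

/*
 * Toy CGI program: it never reads the POST body and writes a response far
 * larger than typical UNIX socket buffers, which is the combination that
 * produces the deadlock described above.
 */
#include <stdio.h>

int main(void)
{
    long i;

    printf("Content-Type: text/plain\r\n\r\n");
    for (i = 0; i < 300L * 1024; i++)   /* well past an 8KB or 220KB buffer */
        putchar('x');
    fflush(stdout);
    return 0;
}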

FWIW, the mod_cgid module doesn't appear to have this deadlock detection, so
in practice this issue could itself be used as a denial of service vector
when mod_cgid is in use, as it will completely lock up the Apache child
server thread with no failsafe to unblock it. I have brought this issue up
before on the list to get someone else to analyse the mod_cgid code and see
whether what I see is correct, but no one seemed interested at the time, so I
took it that people didn't see it as important. It may not have been seen as
such a big issue because on Linux systems the UNIX socket buffer size is on
the order of 220KB. On MacOS X, though, the UNIX socket buffer size is only
8KB, so it is much easier to trigger. Unlike SendBufferSize and
ReceiveBufferSize, there are no directives to override these buffer sizes for
mod_cgi and mod_cgid.

Graham


Re: Using slotmem in /mod_lbmethod_heartbeat/mod_heartmonitor

2009-06-24 Thread jean-frederic clere

Paul Querna wrote:

On Tue, Jun 23, 2009 at 5:35 AM, jean-frederic clere jfcl...@gmail.com wrote:

Hi,

I plan to use slotmem (in addition to the current file-based logic) in the
heartbeat logic.
HeartbeatStorage mem:logs/hb.dat (slotmem, with key/save using logs/hb.dat).
HeartbeatStorage logs/hb.dat (existing logic).

Of course the heartbeat handler will use slotmem and issue an error at
startup if that is not the configured storage. (Actually, the heartbeat
handler doesn't work at the moment.)

The slotmem element will use the proxy_worker_stat and the current heartbeat
format... (well, a string big enough).

Comments?


why do we need to store the same information twice?



Not twice. I will just keep the old file logic and add a new one; the
proxy_worker_stat would come from the slotmem, not from the scoreboard.


Cheers

Jean-Frederic


Re: Please Review! Patch for os/win32/os.h - now reports Win64 when built for 64-bit

2009-06-24 Thread Jorge Schrauwen
I thought I did but couldn't find it.

I've created a new bug report:
https://issues.apache.org/bugzilla/show_bug.cgi?id=47418

~Jorge


On Wed, Jun 24, 2009 at 9:40 PM, Jorge Schrauwen
jorge.schrau...@gmail.com wrote:

 I can't remember,

 I'll open one tomorrow when I have access to my updated patch.

 ~Jorge



 On Wed, Jun 24, 2009 at 7:55 PM, Mario Brandt jbl...@gmail.com wrote:
  Hi Jorge,
  did you also open a bug in bugzilla?
 
  Mario
 
  On Fri, Jun 19, 2009 at 6:30 PM, Jorge
  Schrauwen jorge.schrau...@gmail.com wrote:
  Sorry to dig up this old thread again, but some users of my unofficial
  binary have been complaining that it still says Win32.
 
  So any chance somebody could look at this again? It's a rather trivial
  change, I think.
 
  ~Jorge
 



Re: Client authentication and authorization using client certificates

2009-06-24 Thread Johannes Müller

Eric Covener wrote:

On Tue, Jun 16, 2009 at 5:40 PM, Johannes Müller jo...@gmx.de wrote:
  

Yes, that should be no problem. Relicensing means I'll also have to remove the
current version and SVN revisions, so is there any problem if someone has
already downloaded the GPLed release?



IANAL: I don't see why, they're free to use it under those terms, and
you're free to change the terms of any subsequent release (or prior
release to other parties!)

  



Released version 0.2 of mod_auth_certificate under Apache License 2.0
Download at https://sourceforge.net/projects/modauthcertific/

Any comments?

Greetings,
Johannes