Re: Ldap Authorization
On Oct 26, 2004, at 6:10 PM, Graham Leggett wrote: Ryan Morgan wrote: The mod_authnz_ldap documentation states that authorization schemes can be set up using LDAP filters. From looking at the source, that doesn't appear to be the case. (Authentication uses filters, but the authorization phase does not) I think that type of feature could be useful though. I was thinking of adding an additional directive 'require ldap-attribute name=value'. AFAIR the default attributes for "require group" can be overridden from "member" and "uniqueMember" to anything you like. You are restricted to comparing against the distinguished name of the user though. If you have a patch, open an enhancement report inside Bugzilla and upload it there so that it doesn't fall through the cracks. Extending the support for filters in the authorisation phase is a definite win. Yep, only being able to match against the user DN with 'require ldap-group' is a bit restrictive. I'll file an enhancement along with the patch. Thanks Graham! -Ryan
Re: Ldap Authorization
Ryan Morgan wrote: The mod_authnz_ldap documentation states that authorization schemes can be set up using LDAP filters. From looking at the source, that doesn't appear to be the case. (Authentication uses filters, but the authorization phase does not) I think that type of feature could be useful though. I was thinking of adding an additional directive 'require ldap-attribute name=value'. AFAIR the default attributes for "require group" can be overridden from "member" and "uniqueMember" to anything you like. You are restricted to comparing against the distinguished name of the user though. If you have a patch, open an enhancement report inside Bugzilla and upload it there so that it doesn't fall through the cracks. Extending the support for filters in the authorisation phase is a definite win. Regards, Graham
Ldap Authorization
Hey all, The mod_authnz_ldap documentation states that authorization schemes can be set up using LDAP filters. From looking at the source, that doesn't appear to be the case. (Authentication uses filters, but the authorization phase does not) I think that type of feature could be useful though. I was thinking of adding an additional directive 'require ldap-attribute name=value'. I have a patch available if the group likes the idea. Thoughts? -Ryan
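As a sketch only: the proposed directive might look like this in a configuration. The `require ldap-attribute name=value` syntax comes from the proposal above; the URL, attribute name, and value below are illustrative placeholders, not from any shipped release or the patch itself.

```apache
<Location /secure>
    AuthType Basic
    AuthName "LDAP-protected area"
    AuthLDAPURL ldap://ldap.example.com/ou=People,dc=example,dc=com?uid
    # Proposed: authorize any authenticated user whose LDAP entry
    # carries this attribute value (names here are hypothetical)
    require ldap-attribute employeeType=engineer
</Location>
```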
Re: Event MPM
--On Tuesday, October 26, 2004 6:55 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: It might work fine, but we already have a MODE_EATCRLF call which ought to do the job (down to the core_input_filter anyway) as long as we remember what it returned. Are you suggesting that we replace that? Would that solve some data-stashed-in-connection-filter cases? Yes, as I don't think EATCRLF is the right thing to do here. I'd suggest replacing the EATCRLF with the non-blocking speculative read and seeing how that works. The side-effect of eating CRLFs is unnecessary. It should do the same thing and would be a more general solution as well as it can be called several times and should return the same thing each time. -- justin
Re: Event MPM
Justin Erenkrantz wrote: --On Tuesday, October 26, 2004 4:56 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: So I'm thinking that we should see MODE_EATCRLF behave differently when core_input_filter has data stashed. I would prefer a more general solution. As I hinted at before, couldn't AP_MODE_SPECULATIVE with APR_NONBLOCK_READ suffice? -- justin It might work fine, but we already have a MODE_EATCRLF call which ought to do the job (down to the core_input_filter anyway) as long as we remember what it returned. Are you suggesting that we replace that? Would that solve some data-stashed-in-connection-filter cases? Greg
Re: Event MPM
Justin Erenkrantz wrote: Connection-level filters like mod_ssl would have to be rewritten to be async. or to simply report whether they held on to any data. This is how EAGAIN return values would work. But, again, I don't think we could add it easily without changing a lot of filter semantics. -- I'd like to see some kind of stashed input data indicator come up the filter chain every time it's invoked. Then we wouldn't have to make an extra trip down the chain to find out whether we should flush or not. Of course the event mpm would benefit too :) Greg
Re: Event MPM
--On Tuesday, October 26, 2004 4:56 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: So I'm thinking that we should see MODE_EATCRLF behave differently when core_input_filter has data stashed. I would prefer a more general solution. As I hinted at before, couldn't AP_MODE_SPECULATIVE with APR_NONBLOCK_READ suffice? -- justin
Re: Event MPM
Justin Erenkrantz wrote: --On Tuesday, October 26, 2004 4:25 PM -0400 Greg Ames that sucks IMO, but it does sound like how the code works today. We do socket read() syscalls during the MODE_EATCRLF calls that are almost always unproductive. They could be optimized away. I don't believe 1.3 does these extra read()s. 1.3 does it in reverse: it discards the extra CRLFs at the beginning of the request lines. Hm. Looking at the code, httpd-2.x does the same thing in read_request_line. So, we could probably toss the EATCRLF mode entirely without any ill effects. It's kind of pointless... IIRC we invented MODE_PEEK (MODE_EATCRLF's ancestor, for the newbies) so we could pack response bodies for true pipelined requests into the minimum number of network packets. The idea was to bypass the network flush if there is more input data, to optimize throughput. It's a great idea at a high level, but we burn too many cycles implementing the check in the common case. So I'm thinking that we should see MODE_EATCRLF behave differently when core_input_filter has data stashed. Greg
Re: Event MPM
--On Tuesday, October 26, 2004 4:25 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: that sucks IMO, but it does sound like how the code works today. We do socket read() syscalls during the MODE_EATCRLF calls that are almost always unproductive. They could be optimized away. I don't believe 1.3 does these extra read()s. 1.3 does it in reverse: it discards the extra CRLFs at the beginning of the request lines. Hm. Looking at the code, httpd-2.x does the same thing in read_request_line. So, we could probably toss the EATCRLF mode entirely without any ill effects. It's kind of pointless... Connection-level filters like mod_ssl would have to be rewritten to be async. or to simply report whether they held on to any data. This is how EAGAIN return values would work. But, again, I don't think we could add it easily without changing a lot of filter semantics. -- justin
Re: Event MPM
--On Tuesday, October 26, 2004 2:20 PM -0500 "William A. Rowe, Jr." <[EMAIL PROTECTED]> wrote: Justin, that's the purpose of a Poll bucket. If we introduce the concept, here's the scenario; whenever a 'speculative' or 'non blocking' read or write is attempted and fails, at whatever depth (the socket or the ssl filter itself), a metadata bucket comes back saying 'I got stuck - here! Wait on this fd/event.' No, that's not the problem. The problem is that some connection filter has a buffer that was read from the socket but hasn't passed that along yet. Hence, a poll bucket is worthless in this scenario. AIUI, a poll bucket would be useful to abstract the pollset API. But, I'm not sure that's helpful to solve this problem. FWIW, if we start to go down this route, to me this smells like 2.3 candidate work. This is likely going to snowball real fast into other areas and I'd really like to keep us close to seriously discussing 2.2 at AC in a few weeks instead of throwing HEAD into turmoil with these changes. I think you are oversimplifying. 3rd party modules expect they will remain glued to a thread. When a request can start to 'bounce' between threads, we will likely need to push to 3.0. I certainly don't think I'm oversimplifying. And, 2.3 could certainly yield 3.0 based on our versioning scheme: we can't go to 3.0 directly. -- justin
Re: Event MPM
Justin Erenkrantz wrote: The problem is that there is no reliable way to determine if there is more data in the input filters without actually invoking a read. that sucks IMO, but it does sound like how the code works today. We do socket read() syscalls during the MODE_EATCRLF calls that are almost always unproductive. They could be optimized away. I don't believe 1.3 does these extra read()s. Connection-level filters like mod_ssl would have to be rewritten to be async. or to simply report whether they held on to any data. SPECULATIVE with APR_NONBLOCK_READ will come the closest to achieving the goal though. However, I expect mod_ssl isn't going to work quite right with non-blocking reads. Perhaps we could disable the use of the event thread when there are filters like mod_ssl for now, and see what we can do about true pipelining for non-ssl. Greg
Re: Event MPM
At 01:59 AM 10/26/2004, Justin Erenkrantz wrote: >core_input_filter or any connection-level filter (say SSL) could be holding onto a >complete request that hasn't been processed yet. The worker thread will only process >one request and then put it back on the stack. But, there's certainly no reason why >another request isn't already in the chain ready to be read. And, the listener/event >thread will be waiting for more data to come in - but, we already read it. Oops. >(And, perhaps, it's not enough to be a complete request - so it'd block defeating the >purpose of the event thread - Oops again.) Justin, that's the purpose of a Poll bucket. If we introduce the concept, here's the scenario; whenever a 'speculative' or 'non blocking' read or write is attempted and fails, at whatever depth (the socket or the ssl filter itself), a metadata bucket comes back saying 'I got stuck - here! Wait on this fd/event.' With a little more clever magic, the thread handling that request would push this request, and that poll event, back to the 'event manager', and yield to the event manager for another request (maybe the same request) to be processed. Someone cried wolf, b.t.w., about connection and request pool allocation being too tightly coupled to threads. They can be decoupled pretty painlessly, by tying an allocator to a single connection object. We can presume that request pools will be a subpool of each connection. Note - there are usually more connections than actual worker threads. Also note, clever httpd-2.0 tricks like mtmalloc can't work under this model. >FWIW, if we start to go down this route, to me this smells like 2.3 candidate work. >This is likely going to snowball real fast into other areas and I'd really like to >keep us close to seriously discussing 2.2 at AC in a few weeks instead of throwing >HEAD into turmoil with these changes. I think you are oversimplifying. 3rd party modules expect they will remain glued to a thread. 
When a request can start to 'bounce' between threads, we will likely need to push to 3.0.
Re: Event MPM
--On Tuesday, October 26, 2004 1:53 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: right, I understand. check_pipeline_flush already tests whether the input filters hold any data if I'm not mistaken. It doesn't quite work like that. check_pipeline_flush (via the EATCRLF get_brigade call) only does anything if there are stray CRLFs after a request - it doesn't return any knowledge if there is a request pending. (In fact, EATCRLF will actually read the data from the socket into core_input_filter's buffer - so it'll directly cause the poll() to not work correctly.) For example, when mod_ssl is active, an EATCRLF call always returns ENOTIMPL. So, the check_pipeline_flush doesn't always work as expected and the EATCRLF check isn't enough to determine if there is any 'held' data. not if we modify it so that the worker thread doesn't give up the connection to the event thread when there is more data in the input filters. That's what I meant by "react appropriately". Sorry if I wasn't clear. The problem is that there is no reliable way to determine if there is more data in the input filters without actually invoking a read. Connection-level filters like mod_ssl would have to be rewritten to be async. SPECULATIVE with APR_NONBLOCK_READ will come the closest to achieving the goal though. However, I expect mod_ssl isn't going to work quite right with non-blocking reads. Trying to support both 'slow' *and* 'fast' connections I think will require changes outside of the scope of the MPM. This is why I'd prefer branching 2.3 and work on it in there: these changes are likely to snowball. Plus, this effort dovetails with trying to rethink how filters work. -- justin
Re: Event MPM
Justin Erenkrantz wrote: --On Tuesday, October 26, 2004 12:03 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: Yes, this needs to be fixed. I don't see it as a difficult problem. We already test for whether the output filters need to be flushed (i.e., is there any more data in the input filter chain) before the request ends. We just need to remember the outcome of that test and react appropriately. This isn't about whether the output filters need to be flushed: this is about whether the connection-level input filters have already read data from the socket and are hanging onto data that needs to be processed before the client will send more data. right, I understand. check_pipeline_flush already tests whether the input filters hold any data if I'm not mistaken. The event MPM will be sitting on a poll() waiting for the client to send more data when the client already has sent the next request. not if we modify it so that the worker thread doesn't give up the connection to the event thread when there is more data in the input filters. That's what I meant by "react appropriately". Sorry if I wasn't clear. Greg
Re: Environment handling in mod_rewrite on a Windows platform
At 07:59 AM 10/26/2004, Philip wrote: >Bill, > >Did these changes make it into 1.3.32 -- I noticed a lot of changes in the log for >mod_rewrite, but not these ones. c.f. http://cvs.apache.org/viewcvs.cgi/apache-1.3/src/modules/standard/mod_rewrite.c rev 1.194. Thanks again for your patch, Bill
Re: cvs commit: httpd-2.0/server protocol.c
>> You MUST have SOMETHING that knows the difference >> or you don't have DOS protection. >> >> Also... if you wait all the way until you have a 'log' entry for >> a DOS in progress then you haven't achieved the goal >> of sensing them 'at the front door'. > > I don't set myself that goal. I agree that it's the best place > to detect a DoS but it's often not possible for various reasons. > With that option not available I prefer to be able to detect > DoS attacks anywhere I can. Roger that. >> What I was suggesting is some kind of 'connection' based >> filter that has all the well-known DOS attack scheme >> algorithms in place and can 'sense' when they are happening >> before the Server gets overloaded. > > That does not need to be in web server at all. It can > work from within the kernel, or be a part of a network > gateway. Double Roger That Yours... Kevin Kiley
Re: Event MPM
--On Tuesday, October 26, 2004 12:03 PM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: Yes, this needs to be fixed. I don't see it as a difficult problem. We already test for whether the output filters need to be flushed (i.e., is there any more data in the input filter chain) before the request ends. We just need to remember the outcome of that test and react appropriately. This isn't about whether the output filters need to be flushed: this is about whether the connection-level input filters have already read data from the socket and are hanging onto data that needs to be processed before the client will send more data. The event MPM will be sitting on a poll() waiting for the client to send more data when the client already has sent the next request. Are there a lot of browsers out there that implement true pipelining? I know it's in the RFC for good reasons but I don't believe I've ever seen it in the wild. Mozilla, Opera, Squid, etc. I'm, uh, absolutely against any code being checked in that breaks HTTP pipelining. (You should get the drift of where I'm going.) umm, I disagree. I'm not too concerned about worker threads blocking in socket reads occasionally. The point here is to go after the low hanging fruit (frequent long delays between requests) without massive code churn in the server. Certainly not at the expense of HTTP/1.1-compliance. -- justin
Re: cvs commit: httpd-2.0/server protocol.c
[EMAIL PROTECTED] wrote: > You MUST have SOMETHING that knows the difference > or you don't have DOS protection. > > Also... if you wait all the way until you have a 'log' entry for > a DOS in progress then you haven't achieved the goal > of sensing them 'at the front door'. I don't set myself that goal. I agree that it's the best place to detect a DoS but it's often not possible for various reasons. With that option not available I prefer to be able to detect DoS attacks anywhere I can. > What I was suggesting is some kind of 'connection' based > filter that has all the well-known DOS attack scheme > algorithms in place and can 'sense' when they are happening > before the Server gets overloaded. That does not need to be in the web server at all. It can work from within the kernel, or be a part of a network gateway. -- ModSecurity (http://www.modsecurity.org) [ Open source IDS for Web applications ]
Re: Event MPM
Justin Erenkrantz wrote: this MPM breaks any pipelined connections because there can be a deadlock. core_input_filter or any connection-level filter (say SSL) could be holding onto a complete request that hasn't been processed yet. The worker thread will only process one request and then put it back on the stack. But, there's certainly no reason why another request isn't already in the chain ready to be read. And, the listener/event thread will be waiting for more data to come in - but, we already read it. Oops. Yes, this needs to be fixed. I don't see it as a difficult problem. We already test for whether the output filters need to be flushed (i.e., is there any more data in the input filter chain) before the request ends. We just need to remember the outcome of that test and react appropriately. Are there a lot of browsers out there that implement true pipelining? I know it's in the RFC for good reasons but I don't believe I've ever seen it in the wild. (And, perhaps, it's not enough to be a complete request - so it'd block defeating the purpose of the event thread umm, I disagree. I'm not too concerned about worker threads blocking in socket reads occasionally. The point here is to go after the low hanging fruit (frequent long delays between requests) without massive code churn in the server. Greg
Re: cvs commit: httpd-2.0/server protocol.c
>> In the case you just mentioned... it is going to take >> a special 'filter' to 'sense' that a possible DOS >> attack is in progress. Just fair amounts of 'dataless' >> connection requests from one or a small number of origins >> doesn't qualify. There are plenty of official >> algorithms around now to 'sense' most of these >> brute force attacks and ( only then ) pop you an >> 'alert' or something. >> >> Just relying on a gazillion entries in a log file isn't >> the right way to 'officially' distinguish a DOS attack >> from just ( as Roy says ) 'life on the Internet'. > > Sure, you may need to have some logic to determine what makes > an attack and what not, but you must have the log entry to > begin with so you feed it to the algorithm. Respectfully disagree. There is no 'may' about it. You MUST have SOMETHING that knows the difference or you don't have DOS protection. Also... if you wait all the way until you have a 'log' entry for a DOS in progress then you haven't achieved the goal of sensing them 'at the front door'. What I was suggesting is some kind of 'connection' based filter that has all the well-known DOS attack scheme algorithms in place and can 'sense' when they are happening before the Server gets overloaded. Once the DOS protection kicks in... you don't get any 'log' entries at all... the goal is to prevent the connections from ever turning into 'requests' that the Server has to waste time processing. It's your only chance to survive a real DOS attack. Yours... Kevin Kiley In a message dated 10/26/2004 8:50:11 AM Central Daylight Time, [EMAIL PROTECTED] writes: > In the case you just mentioned... it is going to take > a special 'filter' to 'sense' that a possible DOS > attack is in progress. Just fair amounts of 'dataless' > connection requests from one or a small number of origins > doesn't qualify. 
There are plenty of official > algorithms around now to 'sense' most of these > brute force attacks and ( only then ) pop you an > 'alert' or something. > > Just relying on a gazillion entries in a log file isn't > the right way to 'officially' distinguish a DOS attack > from just ( as Roy says ) 'life on the Internet'. Sure, you may need to have some logic to determine what makes an attack and what not, but you must have the log entry to begin with so you feed it to the algorithm.
Re: Event MPM
--On Tuesday, October 26, 2004 11:24 AM -0400 Greg Ames <[EMAIL PROTECTED]> wrote: Does flood allow multiple connections per thread or per process? Ideally the load simulator would scale as well as the server, although that's not strictly necessary. Eventually I want to run at least as many connections as Colm has - 20k last I heard. The threads are fairly lightweight. But, remember, that you can run flood on multiple machines. ;-) -- justin
Re: cvs commit: httpd-2.0/server protocol.c
Jeff Trawick wrote: > On Tue, 26 Oct 2004 14:51:59 +0100, Ivan Ristic <[EMAIL PROTECTED]> wrote: > >> Sure, you may need to have some logic to determine what makes >> an attack and what not, but you must have the log entry to >> begin with so you feed it to the algorithm. > > Something I'm still curious about: Was the logging with Apache 1.3 not > sufficient (logging only for the timeout error)? It still seems that > Apache 2 is going to be logging more than Apache 1.3, which is > something that deserves a bit of scrutiny. Logging timeouts only is fine by me. Actually, I didn't realize the patch would cause logging in other cases. I mean, I am not saying that message alone will resolve all possible abuse scenarios but it certainly helps significantly. I hope to spend some time in the near future testing various DoS scenarios. You'll hear from me if there's anything interesting to tell. -- ModSecurity (http://www.modsecurity.org) [ Open source IDS for Web applications ]
Re: Event MPM
Justin Erenkrantz wrote: Um, flood already lets you do this and lots more. Does flood allow multiple connections per thread or per process? Ideally the load simulator would scale as well as the server, although that's not strictly necessary. Eventually I want to run at least as many connections as Colm has - 20k last I heard. Greg
Re: Event MPM w/ multiple processes
Greg Ames wrote: However, the big thing it doesn't use is accept serialization. hmmm, that would be challenging with a merged listener/event thread. If the event thread is blocked waiting for its turn to accept(), it can't react to a poll popping due to an older connection becoming readable. Yup. I am thinking about different ways of passing the listening sockets around. Both EPoll and KQueue support methods to cheaply disable an FD in their pollset. It just needs exposure in the APR API. This means all event threads are listening for incoming clients. The first one to process the incoming connection gets it. This does not block the other event threads, since they set the listening socket to non-blocking before starting their loop. > This seems to work fine on my tests. It has the sucky side effect of waking up threads sometimes when they are not needed, but on a busy server, trying to accept() will likely be fine, as there will be a backlog of clients to accept(). short war story: we had a bug a couple of years ago where whenever we tried putting the latest httpd into production on daedalus, the load average spiked way up. Brian B and Manoj would get paged. It was caused by using unserialized poll()s rather than unserialized accept()s in the prefork mpm. But that was 200-300 unthreaded processes each using plain ol' vanilla poll() on one or two fd's. I'm thinking we would want to tune for 2-3 processes with the event mpm so this shouldn't be the same situation. That was my feeling as well. The 'thundering herd' problem isn't as significant with a relatively small number of Event MPM Processes, compared to 1000+ Prefork Children. -Paul
Re: More musings about asynchronous MPMs Re: Event MPM
I don't claim to be a java expert but I use apache extensively and figure I might as well toss in my $.02 Frankly, there are two reasons I hate java. 1) It's a resource hog. Running on a JRE or Virtual Machine setup provides an extra layer of resource requirements that a similar C, C++, etc program would not really have. 2) It's relatively slow in comparison to many modern languages. Mostly due to item 1 and the way that the JRE does cleanup, etc. I recognize that there are also several positive effects that moving to a java based MPM would have. For me, I have recently been subjected repeatedly to the mantra of one of my co-workers: "Use the best tool for the job at hand." Frankly, for MPMs whose main requirement is speed, efficiency, and a large level of scalability, I really don't think that Java is going to be a more useful tool in executing highly scalable MPM designs than C, C++, or some other languages would be. Just my humble opinion, take it or leave it for what it's worth. -- Wayne S. Frazee "Any sufficiently developed bug is indistinguishable from a feature." On Monday 25 October 2004 15:51, Brian Pane wrote: > There are a lot of reasons *not* to do so, mostly related to all the > existing httpd-2.0 modules > that wouldn't work. The things that seem appealing about trying a > Java-based httpd, though, > are: > > - The pool memory model at the core of httpd-1.x and -2.x isn't well > suited to MPM > designs where multiple threads need to handle the same > connection--possibly at the same > time, for example when a handler that's generating content needs to push > output buckets > to an I/O completion thread to avoid having to block the (probably > heavyweight) handler > thread or make it event-based. Garbage collection on a per-object basis > would be a lot > easier. > - Modern Java implementations seem to be doing smart things from a > scalability perspective, > like using kqueue/epoll/etc. 
> - And (minor issue) it's a lot easier to refactor things in Java than in > C, and I expect that > building a good async MPM that handles dynamic content and proxying > effectively will > require a lot of iterations of design trial and error. > > Brian
Re: Event MPM w/ multiple processes
Paul Querna wrote: The updated patch for today adds multiple processes. cool! However, the big thing it doesn't use is accept serialization. hmmm, that would be challenging with a merged listener/event thread. If the event thread is blocked waiting for its turn to accept(), it can't react to a poll popping due to an older connection becoming readable. This means all event threads are listening for incoming clients. The first one to process the incoming connection gets it. This does not block the other event threads, since they set the listening socket to non-blocking before starting their loop. > This seems to work fine on my tests. It has the sucky side effect of waking up threads sometimes when they are not needed, but on a busy server, trying to accept() will likely be fine, as there will be a backlog of clients to accept(). short war story: we had a bug a couple of years ago where whenever we tried putting the latest httpd into production on daedalus, the load average spiked way up. Brian B and Manoj would get paged. It was caused by using unserialized poll()s rather than unserialized accept()s in the prefork mpm. But that was 200-300 unthreaded processes each using plain ol' vanilla poll() on one or two fd's. I'm thinking we would want to tune for 2-3 processes with the event mpm so this shouldn't be the same situation. off to see how a 2.6 kernel gets along with a Pentium Pro/study diffs/etc. Greg
Re: cvs commit: httpd-2.0/server protocol.c
On Tue, 26 Oct 2004 14:51:59 +0100, Ivan Ristic <[EMAIL PROTECTED]> wrote: > > Sure, you may need to have some logic to determine what makes > an attack and what not, but you must have the log entry to > begin with so you feed it to the algorithm. Something I'm still curious about: Was the logging with Apache 1.3 not sufficient (logging only for the timeout error)? It still seems that Apache 2 is going to be logging more than Apache 1.3, which is something that deserves a bit of scrutiny.
Re: cvs commit: httpd-2.0/server protocol.c
> In the case you just mentioned... it is going to take > a special 'filter' to 'sense' that a possible DOS > attack is in progress. Just fair amounts of 'dataless' > connection requests from one or a small number of origins > doesn't qualify. There are plenty of official > algorithms around now to 'sense' most of these > brute force attacks and ( only then ) pop you an > 'alert' or something. > > Just relying on a gazillion entries in a log file isn't > the right way to 'officially' distinguish a DOS attack > from just ( as Roy says ) 'life on the Internet'. Sure, you may need to have some logic to determine what makes an attack and what not, but you must have the log entry to begin with so you feed it to the algorithm. -- ModSecurity (http://www.modsecurity.org) [ Open source IDS for Web applications ]
Re: cvs commit: httpd-2.0/server protocol.c
Roy T. Fielding wrote: >> What would make more sense is "Error while reading HTTP request line. >> (remote browser didn't send a request?)". This indicates exactly what >> httpd was trying to do when the error occurred, and gives a hint of >> why the error might have occurred. > > We used to have such a message. It was removed from httpd because too > many users complained about the log file growing too fast, particularly > since that is the message which will be logged every time a browser > connects and then its initial request packet gets dropped by the network. > > This is not an error that the server admin can solve -- it is normal > life on the Internet. We really shouldn't be logging it except when > on DEBUG level. As you say, it is normal life on the Internet. I don't think Apache should be hiding the fact that many browsers don't finish with the request line, or timeout in some other way. But the main problem, and it's how this all started, is that without the message it becomes very difficult to detect when you are being attacked. At the same time such attacks are trivial to execute and don't require a fast connection. A smart attacker will open new connections at a very slow rate, just a bit faster than Apache closes them. The only way to figure it out is to be there when it happens or use some other network-level mechanism (netflow, argus, etc), but even that would involve a long time of looking at the logs and comparing it to the access logs. As for people complaining about the error log growing too fast, I am sure their access logs grow *much* faster and they handle that without a problem. My point being logging is part of the package. I am OK with assigning this message to a low log level, but I don't think DEBUG is the correct choice. -- ModSecurity (http://www.modsecurity.org) [ Open source IDS for Web applications ]
Re: Environment handling in mod_rewrite on a Windows platform
Bill, Did these changes make it into 1.3.32 -- I noticed a lot of changes in the log for mod_rewrite, but not these ones. Philip William A. Rowe, Jr. wrote: At 12:17 PM 6/21/2004, Philip Gladstone wrote: Hi, I discovered that (on Windows) mod_rewrite.c invokes CreateProcess when creating an external program to handle rewriting requests. It calls CreateProcess and passes 'environ' to be the environment of the called process. Unfortunately, environ is a 'char **' and the argument to CreateProcess is a sequence of strings of the form 'name=value\0'. This results in the invoked process getting a garbage environment which can cause problems. The fix is simple -- replace environ by 0. This signals that the calling process environment is to be copied and then used by the new process. Fix committed for 1.3.32-dev, and is not an issue for 2.0. While you are on the subject; If you had energy to attack it, this module still needs some mutexing to allow concurrent threads, to avoid corrupting the rewrite cache. Again, not an issue in Apache 2.0, but if you were throwing cycles into cleaning up this module I'd be happy to review such a patch. See the ap_xxx_mutex api's and check #ifdef MULTITHREAD to portably determine if/how we create this protection around the rewrite cache. Thanks, Bill -- Philip Gladstone 978-ZEN-TOAD (978-936-8623) Cisco Systems, Inc Boxboro, MA
Showstopper for 1.3.33
There is currently one showstopper holding up release of 1.3.33. It has 2 votes for and none against, and it's for backing out a patch recently applied in mod_rewrite... Please look it over, otherwise I'll assume a lazy consensus (I know, I know) and apply it.