Darn: Re: cvs commit: httpd-2.0/modules/ssl ssl_engine_io.c
On Jan 23, 2004, at 9:28 PM, PENPRAPA MUNKID wrote: more info www.naraico.. Sorry, I moderated that through before I realized that it was a form of spam. Consider this a warning to other moderators. That was a reply to a commit message with the spammer's website hanging off the "more info" link. - ben
page out of date
http://apache.get-software.com/httpd/binaries/win32/README.html doesn't have the correct version numbers. As an aside, would it make more sense to use SSI and get the version number from the SERVER_SOFTWARE environment variable? (I assume apache.org will always be running the most up-to-date version of Apache.) -- Aryeh Katz SecureD Services http://www.secured-services.com/ 410 653 0700 x 2
Re: [PATCH] raise MAX_SERVER_LIMIT
On Thu, Jan 15, 2004 at 04:04:38PM +, Colm MacCarthaigh wrote: There were other changes coincidental to that, like going to 12Gb of RAM, which certainly helped, so it's hard to narrow it down too much. OK, with 18,000 or so child processes (all in the run queue), what does your load look like? Also, what kind of memory footprint are you seeing? I don't use worker because it still dumps an un-backtraceable corefile within about 5 minutes for me. I still have no idea why, though I have plenty of corefiles. I haven't tried a serious analysis yet, because I've been moving house, but I hope to get to it soon. Moving to worker would be a good thing :) I'd love to find out what's causing your worker failures. Are you using any thread-unsafe modules or libraries? -aaron
Re: [PATCH] raise MAX_SERVER_LIMIT
On Mon, Jan 26, 2004 at 10:09:20AM -0800, Aaron Bannert wrote: On Thu, Jan 15, 2004 at 04:04:38PM +, Colm MacCarthaigh wrote: There were other changes coincidental to that, like going to 12Gb of RAM, which certainly helped, so it's hard to narrow it down too much. OK, with 18,000 or so child processes (all in the run queue), what does your load look like? Also, what kind of memory footprint are you seeing? At the time, we were seeing a load of between 8 and 15, varying like a sawtooth waveform. It would climb and climb, there'd be a steady, sharp decrease, and the cycle would start again. At one point I mistakenly compiled the Linux pre-empt options into the kernel, and that made things very interesting. Load was much more radical in its mood swings then. There were points when it would slow to a crawl, and the amount of data we shipped was down - we only managed to peak at 200Mbit/sec during the heaviest part of it. Our daily peak is about 380Mbit, but hopefully we'll be more ready next time. I've managed to commission the second server and move the updates to it; see http://ftp.heanet.ie/about/ for an idea of the architecture. As for memory footprint, it wasn't too bad; I actually put the system into 4Gb mode to avoid bounce-buffering - something I hadn't fully mapped out yet. We were using all of the RAM, but that's not unusual for us; we aggressively cache as much of the filesystem as XFS lets us. All of the Apache instances added up to about 165Mb of RAM. I don't use worker because it still dumps an un-backtraceable corefile within about 5 minutes for me. I still have no idea why, though I have plenty of corefiles. I haven't tried a serious analysis yet, because I've been moving house, but I hope to get to it soon. Moving to worker would be a good thing :) I'd love to find out what's causing your worker failures. Are you using any thread-unsafe modules or libraries?
Not to my knowledge. I wasn't planning to do this till later, but I've bumped to 2.1; I'll try out the forensic_id and backtrace modules right now and see how that goes. -- Colm MacCárthaigh  Public Key: [EMAIL PROTECTED]
Re: Proposal: Allow ServerTokens to specify Server header completely
On Tue, Jan 13, 2004 at 02:04:06PM +, Ivan Ristic wrote: Jim Jagielski wrote: I'd like to get some sort of feedback concerning the idea of having ServerTokens not only adjust what Apache sends in the Server header, but also allow the directive to fully set that info. For example: ServerTokens Set Aporche/3.5 would cause Apache to send Aporche/3.5 as the Server header. Some people want to be able to totally obscure the server type. I like the idea. Right now you either have to change the source code or use mod_security to achieve this, but I think the feature belongs in the server core. But I think a new server directive is a better solution. I think one should have to change the source code in order to have this level of control over the Server: header. -aaron
Re: apache bug archive?
Min Xu (Hsu) wrote: On Thu, 15 Jan 2004 Jeff Trawick wrote: data race? consider http://nagoya.apache.org/bugzilla/show_bug.cgi?id=25520 Thanks. I was able to reproduce this one. hopefully without the fix which I subsequently committed to 2.1-dev :) (gotta propose that one for backport I guess) various bugs with file descriptors could lead to race conditions with data going to the wrong client... consider CAN-2003-0789, fixed in Apache 2.0.48 I am now looking at CAN-2003-0789. I guess the following patch is the fix: http://cvs.apache.org/viewcvs.cgi/httpd-2.0/modules/generators/mod_cgid.c?r1=1.157&r2=1.158&diff_format=h However, I couldn't understand this bug, nor could I find any previous posts explaining it. Can you give me some clues on how to reproduce it? Where is the data race?

not sure if you would call it a data race... the bug was that there were two places where we tried to close a single file descriptor... the 2nd close would fail with EBADF unless some other thread had that file descriptor assigned to another file because it got a socket or file or pipe or whatever after the first close() but before the second close()... that could cause the 2nd thread to fail to write data (getting EBADF) or, even worse, write to some other thread's socket/pipe/file/whatever if thread 3 got that fd assigned in the meantime. race? sure... call it a file descriptor race or something like that.

BTW, syscall traces for any supposedly-thread-safe code should be checked very carefully for EBADF return codes from close()... if it can get EBADF from close() passing something other than -1, then it isn't thread-safe. perhaps APR with --enable-maintainer-mode needs to blow up if APR calls close(val >= 0) and gets back EBADF? I don't know if this would have helped CAN-2003-0789 (not sure if the apr close came after the OS close), but it might provide some help in the future, or at least some comfort knowing that it hasn't blown up.
Re: [PATCH] raise MAX_SERVER_LIMIT
On Mon, Jan 26, 2004 at 06:28:03PM +, Colm MacCarthaigh wrote: I'd love to find out what's causing your worker failures. Are you using any thread-unsafe modules or libraries? Not to my knowledge. I wasn't planning to do this till later, but I've bumped to 2.1; I'll try out the forensic_id and backtrace modules right now and see how that goes. *sigh*, forensic_id didn't catch it, backtrace didn't catch it, whatkilledus didn't catch it, all tried individually. The parent just dumps core; the children live on, serve their content, log their requests, and then drop off one by one. No incomplete requests, no backtrace or other exception info thrown into any log. The corefile is as useful as ever: un-backtraceable. Suggestions welcome! -- Colm MacCárthaigh  Public Key: [EMAIL PROTECTED]
Re: Proposal: Allow ServerTokens to specify Server header completely
On Mon, 26 Jan 2004, Aaron Bannert wrote: I think one should have to change the source code in order to have this level of control over the Server: header. I strongly agree. --Cliff
doc patch - http_protocol.h
I leave the formatting up to you, but the patch follows:

# diff -u http_protocol.old.h http_protocol.h
--- http_protocol.old.h
+++ http_protocol.h
@@ -528,7 +528,7 @@
  * @param r The current request
  * @param pw The password as set in the headers
  * @return 0 (OK) if it set the 'pw' argument (and assured
- * a correct value in r->connection->user); otherwise it returns
+ * a correct value in r->user); otherwise it returns
  * an error code, either HTTP_INTERNAL_SERVER_ERROR if things are
  * really confused, HTTP_UNAUTHORIZED if no authentication at all
  * seemed to be in use, or DECLINED if there was authentication

-- Aryeh Katz SecureD Services http://www.secured-services.com/ 410 653 0700 x 2
Re: [PATCH] raise MAX_SERVER_LIMIT
On Mon, Jan 26, 2004 at 07:37:23PM +, Colm MacCarthaigh wrote: On Mon, Jan 26, 2004 at 06:28:03PM +, Colm MacCarthaigh wrote: I'd love to find out what's causing your worker failures. Are you using any thread-unsafe modules or libraries? Not to my knowledge. I wasn't planning to do this till later, but I've bumped to 2.1; I'll try out the forensic_id and backtrace modules right now and see how that goes. *sigh*, forensic_id didn't catch it, backtrace didn't catch it, whatkilledus didn't catch it, all tried individually. The parent just dumps core; the children live on, serve their content, log their requests, and then drop off one by one. No incomplete requests, no backtrace or other exception info thrown into any log. The corefile is as useful as ever: un-backtraceable. Suggestions welcome! Have you tried setting up a signal handler for SIGSEGV and calling kill(getpid(), SIGSTOP); in the signal handler? After attaching to the process with gdb, send a CONT signal to the process from another terminal. It's worth a shot. (Is the process dying from SIGSEGV or some other signal? Does the core file tell you?) Can you get a tcpdump of the traffic leading up to the crash? (Yeah, I know it would be a lot.) If you can get a tcpdump, and then can replay the traffic and reproduce it, more of us can look at this. Cheers, Glenn
Re: [PATCH] raise MAX_SERVER_LIMIT
Colm MacCarthaigh wrote: On Mon, Jan 26, 2004 at 06:28:03PM +, Colm MacCarthaigh wrote: I'd love to find out what's causing your worker failures. Are you using any thread-unsafe modules or libraries? Not to my knowledge. I wasn't planning to do this till later, but I've bumped to 2.1; I'll try out the forensic_id and backtrace modules right now and see how that goes. *sigh*, forensic_id didn't catch it, forensic_id is just for crashes in the child. backtrace didn't catch it, whatkilledus didn't catch it, all tried individually. disable the check for geteuid() == 0 and see if you get a backtrace? the exception hook purposefully doesn't run as root (I assume your parent is running as root)
Re: doc patch - http_protocol.h
Aryeh Katz wrote:

# diff -u http_protocol.old.h http_protocol.h
--- http_protocol.old.h
+++ http_protocol.h
@@ -528,7 +528,7 @@
  * @param r The current request
  * @param pw The password as set in the headers
  * @return 0 (OK) if it set the 'pw' argument (and assured
- * a correct value in r->connection->user); otherwise it returns
+ * a correct value in r->user); otherwise it returns

fix committed to 2.0.next and 2.1-dev; thanks!
Re: [PATCH] raise MAX_SERVER_LIMIT
On Mon, Jan 26, 2004 at 04:25:58PM -0500, Jeff Trawick wrote: *sigh*, forensic_id didn't catch it, forensic_id is just for crashes in the child I know, but I couldn't rule out a crash in the child as a root cause... until now; it doesn't look like it's triggered by a particular URI anyway. backtrace didn't catch it, whatkilledus didn't catch it, all tried individually. disable the check for geteuid() == 0 and see if you get a backtrace? the exception hook purposefully doesn't run as root (I assume your parent is running as root) No problem, first thing tomorrow :) -- Colm MacCárthaigh  Public Key: [EMAIL PROTECTED]
Re: mystery solved... perhaps.
On Fri, 2004-01-23 at 08:07, Joe Orton wrote: Nice, this is easy enough to reproduce. It only fills up because the httpd children all have the read end of the pipe open, which is a bug in itself. Applying the patch below ensures that the pipe gets closed when the piped logger exits, so writes fail with EPIPE rather than blocking up in the leftover children. Yup, it fixes the immediate hanging-httpd problem, but then the httpd child is left unable to log *anything* after the 'graceful' for the remainder of the svn commit. If the 'graceful' happens early on, you could potentially lose all logging for most of the commit. It's not a real fix. To fix this properly, I suppose piped loggers should not get SIGTERMed during a graceful restart; they should read till EOF and then exit(0): then, when the last child attached to the piped logger for a particular generation quits, the pipe is closed and the piped logger terminates gracefully too, without losing log messages. Yah, that sounds nice. If the Apache developers are OK with it, I'd like to file an issue about this in the public httpd issue tracker. What we've really got here is a bug in the piped logger code; it just can't deal with long-lived httpd children. Anyone mind if I file an issue? Anyone using Subversion (mod_dav_svn) with 'rotatelogs' is likely to be burned by this problem. I'd like to be able to point them to the issue, at least.