Re: Optimizing dir_merge() AND RE: [BUG] mod_ssl broken

2001-09-20 Thread William A. Rowe, Jr.

From: "Sander Striker" <[EMAIL PROTECTED]>
Sent: Thursday, September 13, 2001 7:30 AM


> Ok, now I have a repro recipe that doesn't require
> mod_dav and mod_dav_svn.

The last commit should have fixed the problem (and does with
your mod_ssl example.)  Could you go back and check mod_dav
with mod_dav_svn to assure I've licked it?

Bill




Re: cvs commit: httpd-2.0/server util_filter.c

2001-09-20 Thread William A. Rowe, Jr.

From: "Roy T. Fielding" <[EMAIL PROTECTED]>
Sent: Thursday, September 20, 2001 5:30 AM


> > That is complete BS.  We have a long standing tradition of NOT making
> > commits just to follow the code style.  There is no need for a vote, because
> > this has been discussed to death and formatting only commits have been
> > vetoed in the past in every thread that they come up in.  Review the archives
> > for Roy and Dean's opinions of formatting changes.  They are completely
> > bogus, and just serve to make CVS hard to use.
> 
> I don't know what you are talking about.  Any code in our repository that
> does not match our style is a bug waiting to happen and will be reformatted
> as soon as I get around to it [otherwise known as "never"].  Readability of
> the code IS a goal of the httpd project.  Under no circumstance will there
> ever be a freeze on changes that make the code easier to read.

Welcome home :-)




Re: Optimizing dir_merge() AND RE: [BUG] mod_ssl broken

2001-09-20 Thread William A. Rowe, Jr.

From: "Sander Striker" <[EMAIL PROTECTED]>
Sent: Thursday, September 13, 2001 7:30 AM


> Ok, now I have a repro recipe that doesn't require
> mod_dav and mod_dav_svn.

Well, I took the easy way out, tried Doug's (using VirtualHost *)
and failed.  Probably would have worked if I tried his _exact_
test case with VirtualHost _default_ :/

Thanks for this case; I have something that I can reproduce.  Nothing
but the standard (win32) built-in modules are loaded besides SSL.

I'll work on this till it's wrapped.  Dang, is this one a bear.  I know
why Doug's patch looked right - now I'm suspecting that we miss something
between the dir_walk flip to another vhost and the actual vhost parsing
itself (since VirtualHost * doesn't seem to trigger it.)

Bill




Re: cvs commit: httpd-2.0/server util_filter.c

2001-09-20 Thread Roy T. Fielding

> That is complete BS.  We have a long standing tradition of NOT making
> commits just to follow the code style.  There is no need for a vote, because
> this has been discussed to death and formatting only commits have been
> vetoed in the past in every thread that they come up in.  Review the archives
> for Roy and Dean's opinions of formatting changes.  They are completely
> bogus, and just serve to make CVS hard to use.

I don't know what you are talking about.  Any code in our repository that
does not match our style is a bug waiting to happen and will be reformatted
as soon as I get around to it [otherwise known as "never"].  Readability of
the code IS a goal of the httpd project.  Under no circumstance will there
ever be a freeze on changes that make the code easier to read.

There does not exist any longstanding opinion that such reformats are bad,
simply the longstanding opinion that Dean believes in the one true tab width
and too many people are too lazy about tabs to keep them consistent within
a file.  This doesn't change the FACTS that tabs don't survive cut and paste
and often get mangled in the mail and cause the CVS commitlogs to be
misaligned if some lines begin with tabs and others begin with spaces.

The only rule we have is to not make reformats in the same commit as changes.

The general guideline is that new/modified code within an existing file
should match the tab/space usage of the code around that being modified,
for the simple reason that it makes the cvs log easier to read and doesn't
piss off the person who spends most of their time maintaining that code.

The other general rule is that reformats should take place before major
releases, rather than after them, because otherwise context diffs from our
users get hosed.

None of this is ever necessary in *my* code, because I am not lazy about
following the style guidelines.  Anyone who commits code that doesn't follow
the guidelines has no say in how many times it needs to be reformatted in
the future, for the same reason that people who aren't willing to write
documentation have no vote on its contents.

Roy




Re: -- Apache: Not enough file descriptors --

2001-09-20 Thread dean gaudet

On Tue, 18 Sep 2001, RCHAPACH Rochester wrote:

> Yes, FD_SETSIZE is defined in sys/types.h on UNIX flavored
> systems.  If you set it to a high enough value
> (e.g. #define FD_SETSIZE 65535) before sys/types.h gets included,
> it will override the value set in sys/types.h.

this isn't portable.  (it fails on all linux versions.)

where poll() is supported it's the preferred work-around to select()
lameness.
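For illustration, a minimal sketch of the poll()-based approach (not taken from
the Apache source; the descriptor and timeout are placeholders):

    #include <poll.h>

    /* wait up to 30 seconds for data on 'sock'; unlike select(), poll()
     * imposes no FD_SETSIZE ceiling on descriptor values */
    static int wait_readable(int sock)
    {
        struct pollfd pfd;
        pfd.fd = sock;
        pfd.events = POLLIN;
        return poll(&pfd, 1, 30 * 1000);   /* > 0 means ready */
    }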

-dean





Re: [PATCH] Timeout-based DoS attack fix

2001-09-20 Thread dean gaudet

On Thu, 20 Sep 2001, Ian Morgan wrote:

> RecvTimeout 5
>
> This will cause any incoming request to timeout if not completed within 5
> seconds. This will cause the above "null" connections to timeout very
> quickly, thereby significantly reducing the number of wasted waiting server
> instances.

so the next version of the DoS will just send a request and then set its
TCP receive window to something really tiny, effectively taking forever to
get the response.

for example, take a look at this "white-hat" program which uses the
technique i just described:  .

not that having multiple configurable timeouts is a bad thing.  i just
wanted to point out that it's not the end of the story :)

-dean




Re: [PATCH] Re: apache-1.3.20 segfault?

2001-09-20 Thread dean gaudet

yeah i considered that, but i don't think rr->filename can be NULL in
1.3... 'cause i don't think you can get rr->status == OK with a NULL
filename...

the only calls to ap_translate_name() which succeed are followed by
ap_directory_walk() which tests for a NULL filename and sets it to a copy
of the URI if it's NULL.  so after directory_walk() you can assume the
filename is not NULL.

dunno if that's still true in 2.0, haven't looked.
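roughly, the behavior described above amounts to something like this (an
illustrative sketch, not the actual directory_walk code):

    /* after translate_name, directory_walk effectively guarantees a
     * non-NULL filename by falling back to a copy of the URI */
    if (r->filename == NULL) {
        r->filename = ap_pstrdup(r->pool, r->uri);
    }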

-dean

On Thu, 20 Sep 2001, Cliff Woolley wrote:

> On Thu, 20 Sep 2001 [EMAIL PROTECTED] wrote:
>
> > this bug has probably been here forever... i can't imagine any way to
> > exploit it.
>
> Jeff fixed the same bug in 2.0 about a month ago.  His fix was very
> similar to yours, though it did one extra check.  Here's the commit
> message.
>
> --Cliff
>
> --
> trawick 01/08/22 05:07:40
>
>   Modified:.CHANGES
>modules/filters mod_include.c
>   Log:
>   Fix a segfault in mod_include when the original request has no
>   associated filename (e.g., we're filtering the error document for
>   a bad URI).
>
>   Reported by: Joshua Slive
>
>   Revision  ChangesPath
> [snip]
>   1.126 +2 -2  httpd-2.0/modules/filters/mod_include.c
>
>   Index: mod_include.c
>   ===
>   RCS file: /home/cvs/httpd-2.0/modules/filters/mod_include.c,v
>   retrieving revision 1.125
>   retrieving revision 1.126
>   diff -u -r1.125 -r1.126
>   --- mod_include.c   2001/08/18 17:36:26 1.125
>   +++ mod_include.c   2001/08/22 12:07:40 1.126
>   @@ -832,8 +832,8 @@
>for (p = r; p != NULL && !founddupe; p = p->main) {
>   request_rec *q;
>   for (q = p; q != NULL; q = q->prev) {
>   -   if ( (strcmp(q->filename, rr->filename) == 0) ||
>   -(strcmp(q->uri, rr->uri) == 0) ){
>   +   if ((q->filename && rr->filename &&
> (strcmp(q->filename, rr->filename) == 0)) ||
>   +(strcmp(q->uri, rr->uri) == 0)) {
>   founddupe = 1;
>   break;
>   }
>
> --
>Cliff Woolley
>[EMAIL PROTECTED]
>Charlottesville, VA
>
>
>




Re: [PATCH] fix cleanups in cleanups

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 08:12 pm, Aaron Bannert wrote:
> On Thu, Sep 20, 2001 at 05:48:45PM -0700, Greg Stein wrote:
> > On Thu, Sep 20, 2001 at 11:18:55AM -0700, Aaron Bannert wrote:
> > Basically, the above code processes the cleanups in batches. Everything
> > that was initially registered is processed, then everything registered
> > during the first cleanup round, etc.
>
> That does encourage deeper recursion; would this be a potential problem
> for OSes like Netware that have a rather small and limited stack size?
> I don't really know how much stack space the pool_clear() routines consume.
>
> > It does not maintain the LIFO behavior where cleanup A registers cleanup
> > B and expects B to run *just after* cleanup A finishes. If A wanted that,
> > then it could just call B. But the most important part: all cleanups
> > *do* get run.
>
> Correct me if I'm wrong, it is a LIFO and what Ryan wants is a FIFO.
>
> LIFO == cleanup registers a cleanup, it gets run after the cleanups
> FIFO == cleanup registers a cleanup, it gets run as soon as this one
> returns
>
> Am I missing something?

Well, I wouldn't say it's what I want; rather, it's the way cleanups have always worked.

As for the FIFO vs LIFO, I think you have it backwards.

LIFO == cleanup registers a cleanup, it gets run as soon as this one returns
FIFO == cleanup registers a cleanup, it gets run after the current batch
of cleanups.
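A minimal sketch of the difference (hypothetical list and pool types, not the
apr_pools.c code): popping one entry at a time gives the "runs right after the
registering cleanup" behavior, while detaching the list and draining it in
batches defers newly registered cleanups to the end.

    struct cleanup {
        struct cleanup *next;
        void (*fn)(void *data);
        void *data;
    };

    struct pool {
        struct cleanup *cleanups;  /* registration pushes onto the head */
    };

    /* pop one at a time: a cleanup registered by fn() becomes the new
     * head and runs on the very next iteration */
    static void run_cleanups_one_by_one(struct pool *p)
    {
        struct cleanup *c;
        while ((c = p->cleanups) != NULL) {
            p->cleanups = c->next;
            c->fn(c->data);            /* may register more cleanups */
        }
    }

    /* batch style: detach the whole list first, so anything registered
     * during this pass runs only after the current batch finishes */
    static void run_cleanups_batched(struct pool *p)
    {
        struct cleanup *c, *next;
        while ((c = p->cleanups) != NULL) {
            p->cleanups = NULL;
            for (; c != NULL; c = next) {
                next = c->next;
                c->fn(c->data);
            }
        }
    }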

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] fix cleanups in cleanups

2001-09-20 Thread Aaron Bannert

On Thu, Sep 20, 2001 at 05:48:45PM -0700, Greg Stein wrote:
> On Thu, Sep 20, 2001 at 11:18:55AM -0700, Aaron Bannert wrote:
> >...
> > Does this fix it for you? All testmem tests passed for me and your code
> > above properly flushes "Cleanup" to the file.
> > 
> > (Someone needs to check my work on run_child_cleanups() and make sure
> > that the popping is necessary. It looked to be in the same situation.)
> 
> Calling pop_cleanup() on every iteration is a bit much. Consider the
> following patch:

The reason I went that route, instead of just inlining the pop_cleanup()
code, is that it gets called in two places. If it is a performance issue,
I'd just put it all inside the while loop in run_cleanups(). Would
that be preferable?

> 
> while ((c = p->cleanups) != NULL) {
> p->cleanups = NULL;
> run_cleanups(c);
> }
> 
> You don't even have to change run_cleanups or run_child_cleanups.

Wouldn't that go in run_cleanups(), or does this go in apr_pool_clear()?

> Basically, the above code processes the cleanups in batches. Everything that
> was initially registered is processed, then everything registered during the
> first cleanup round, etc.

That does encourage deeper recursion; would this be a potential problem
for OSes like Netware that have a rather small and limited stack size?
I don't really know how much stack space the pool_clear() routines consume.

> It does not maintain the LIFO behavior where cleanup A registers cleanup B
> and expects B to run *just after* cleanup A finishes. If A wanted that, then
> it could just call B. But the most important part: all cleanups *do* get
> run.

Correct me if I'm wrong: it is a LIFO and what Ryan wants is a FIFO.

LIFO == cleanup registers a cleanup, it gets run after the cleanups
FIFO == cleanup registers a cleanup, it gets run as soon as this one returns

Am I missing something?

-aaron



Re: [PATCH] Re: apache-1.3.20 segfault?

2001-09-20 Thread Cliff Woolley

On Thu, 20 Sep 2001 [EMAIL PROTECTED] wrote:

> this bug has probably been here forever... i can't imagine any way to
> exploit it.

Jeff fixed the same bug in 2.0 about a month ago.  His fix was very
similar to yours, though it did one extra check.  Here's the commit
message.

--Cliff

--
trawick 01/08/22 05:07:40

  Modified:.CHANGES
   modules/filters mod_include.c
  Log:
  Fix a segfault in mod_include when the original request has no
  associated filename (e.g., we're filtering the error document for
  a bad URI).

  Reported by: Joshua Slive

  Revision  ChangesPath
[snip]
  1.126 +2 -2  httpd-2.0/modules/filters/mod_include.c

  Index: mod_include.c
  ===
  RCS file: /home/cvs/httpd-2.0/modules/filters/mod_include.c,v
  retrieving revision 1.125
  retrieving revision 1.126
  diff -u -r1.125 -r1.126
  --- mod_include.c 2001/08/18 17:36:26 1.125
  +++ mod_include.c 2001/08/22 12:07:40 1.126
  @@ -832,8 +832,8 @@
   for (p = r; p != NULL && !founddupe; p = p->main) {
request_rec *q;
for (q = p; q != NULL; q = q->prev) {
  - if ( (strcmp(q->filename, rr->filename) == 0) ||
  -  (strcmp(q->uri, rr->uri) == 0) ){
  + if ((q->filename && rr->filename &&
(strcmp(q->filename, rr->filename) == 0)) ||
  +(strcmp(q->uri, rr->uri) == 0)) {
founddupe = 1;
break;
}

--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA





Re: [PATCH] fix cleanups in cleanups

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 05:48 pm, Greg Stein wrote:
> On Thu, Sep 20, 2001 at 11:18:55AM -0700, Aaron Bannert wrote:
> >...
> > Does this fix it for you? All testmem tests passed for me and your code
> > above properly flushes "Cleanup" to the file.
> >
> > (Someone needs to check my work on run_child_cleanups() and make sure
> > that the popping is necessary. It looked to be in the same situation.)
>
> Calling pop_cleanup() on every iteration is a bit much. Consider the
> following patch:

Why is it a bit much?  I just took a quick look at it; it is an if and three
assignments.  I would assume that any compiler worth its salt would inline this
function as well.

This patch also keeps the LIFO behavior, which is important because it makes it
much less likely that an item allocated out of the pool when the cleanup was
registered will no longer be there when the cleanup is run.

>
> while ((c = p->cleanups) != NULL) {
> p->cleanups = NULL;
> run_cleanups(c);
> }

Which function is this in?  I have looked, but the only place that I can find to
put this code would be in apr_pool_clear, around the run_cleanups code.

> Basically, the above code processes the cleanups in batches. Everything
> that was initially registered is processed, then everything registerd
> during the first cleanup round, etc.

This makes it much more likely that a variable in the same pool that was available
when the cleanup was registered would not be available when your cleanup ran.
I would really want to see a performance analysis before we broke that behavior.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



[PATCH] Re: apache-1.3.20 segfault?

2001-09-20 Thread dean

On Thu, 20 Sep 2001, dean gaudet wrote:

> hrm, is the segfault described below a known bug?  (i haven't tried it...)
>
> -dean
>
> -- Forwarded message --
> From: Jeff Moe <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: Serous TUX 2.4.9-J5 problem
>
> Apache 1.3.20 (and presumably earlier) has a similar bug. I noticed this
> during the recent worming. It may be related to Tux's problem. Here's how to
> reproduce it in Apache:
>
> 1) You need to redirect 404s to a 404 document:
> ErrorDocument 404 /fourofour.shtml
> 2) You need be parsing that file:
> AddHandler server-parsed .shtml
> 3) You need to send it a request like:
> http://server.com/test%2fing
>
> Apache will Segfault and you'll get a "Document returned no data error" in
> the browser.
>
> -Jeff

yeah this segfault occurs with 1.3.20 and top of 1.3, but it appears you
need something like:



in the fourofour.shtml.

patch below fixes it.  however i'm not so sure it's exactly the right
fix... but there appear to be other examples where we test if filename !=
NULL.  (boy am i rusty in apache code.)

this bug has probably been here forever... i can't imagine any way to
exploit it.

-dean

Index: include/httpd.h
===
RCS file: /home/cvs/apache-1.3/src/include/httpd.h,v
retrieving revision 1.344
diff -u -r1.344 httpd.h
--- include/httpd.h 2001/08/13 17:09:42 1.344
+++ include/httpd.h 2001/09/21 02:09:27
@@ -806,7 +806,7 @@

 char *unparsed_uri;/* the uri without any parsing performed */
 char *uri; /* the path portion of the URI */
-char *filename;
+char *filename;/* filename if found, otherwise NULL */
 char *path_info;
 char *args;/* QUERY_ARGS, if any */
 struct stat finfo; /* ST_MODE set to zero if no such file */
Index: modules/standard/mod_include.c
===
RCS file: /home/cvs/apache-1.3/src/modules/standard/mod_include.c,v
retrieving revision 1.129
diff -u -r1.129 mod_include.c
--- modules/standard/mod_include.c  2001/07/13 19:45:52 1.129
+++ modules/standard/mod_include.c  2001/09/21 02:09:27
@@ -718,7 +718,7 @@
 for (p = r; p != NULL && !founddupe; p = p->main) {
request_rec *q;
for (q = p; q != NULL; q = q->prev) {
-   if ( (strcmp(q->filename, rr->filename) == 0) ||
+   if ( (q->filename && strcmp(q->filename, rr->filename) == 0) ||
 (strcmp(q->uri, rr->uri) == 0) ){
founddupe = 1;
break;





Re: [PATCH] fix cleanups in cleanups

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 05:48 pm, Greg Stein wrote:
> On Thu, Sep 20, 2001 at 11:18:55AM -0700, Aaron Bannert wrote:
> >...
> > Does this fix it for you? All testmem tests passed for me and your code
> > above properly flushes "Cleanup" to the file.
> >
> > (Someone needs to check my work on run_child_cleanups() and make sure
> > that the popping is necessary. It looked to be in the same situation.)
>
> Calling pop_cleanup() on every iteration is a bit much. Consider the
> following patch:
>
> while ((c = p->cleanups) != NULL) {
> p->cleanups = NULL;
> run_cleanups(c);
> }
>
> You don't even have to change run_cleanups or run_child_cleanups.
>
> Basically, the above code processes the cleanups in batches. Everything
> that was initially registered is processed, then everything registered
> during the first cleanup round, etc.
>
> It does not maintain the LIFO behavior where cleanup A registers cleanup B
> and expects B to run *just after* cleanup A finishes. If A wanted that,
> then it could just call B. But the most important part: all cleanups *do*
> get run.

You've got to keep the LIFO behavior, or the kind of problems you posted
about yesterday are more likely.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: pool cleanup

2001-09-20 Thread Greg Stein

On Thu, Sep 20, 2001 at 07:02:58AM -0700, Ryan Bloom wrote:
> On Wednesday 19 September 2001 02:21 pm, Greg Stein wrote:
>...
> > They are not strictly LIFO. You can remove a cleanup and insert a new one
> > at any time. Let's say that the cleanup list looked like:
> >
> > cleanups: A
> >
> > and you add a new one to the "front":
> >
> > cleanups: B A
> >
> > and now case 1, where A needs to rejigger its cleanup param a bit:
> >
> > cleanups: A' B
> >
> > or case 2, where A simply removes its cleanup:
> >
> > cleanups: B
> >
> >
> > Case 2 actually happens quite often.
> 
> This is all true, but it is also orthogonal to this conversation.

Partly. The conversation moved into "what can you do in a cleanup". If you
want to look at the simple issue of registering cleanups... okay. But when
people were expecting to be able to do "anything" in a cleanup... that is
intrinsically incorrect.

> The question we are
> trying to answer here, is can you register a cleanup within a cleanup.

Aaron posted a patch, but it introduces too many function calls in the
processing. I posted one that is much more optimal, processing the cleanups
in batches. That would fix your issue.

> If we are in
> the middle of running the cleanups, and somebody actually calls cleanup_run 
> or cleanup_kill from within a cleanup, they are broken and it may not work.

My above case wasn't talking about doing those from within a cleanup (which
is definitely and always wrong). I was showing how the cleanups could be
reordered; therefore, how you cannot depend upon particular cross-cleanup
ordering dependencies. Thus, you are actually somewhat limited in what kinds
of things you can truly do in a cleanup.
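As a small sketch of case 2 in APR terms (the callback and data names here are
hypothetical):

    #include "apr_pools.h"

    static apr_status_t a_cleanup(void *data) { return APR_SUCCESS; }
    static apr_status_t b_cleanup(void *data) { return APR_SUCCESS; }

    static void reorder_example(apr_pool_t *pool, void *a_data, void *b_data)
    {
        /* cleanups: A */
        apr_pool_cleanup_register(pool, a_data, a_cleanup, apr_pool_cleanup_null);

        /* cleanups: B A -- new registrations go on the front */
        apr_pool_cleanup_register(pool, b_data, b_cleanup, apr_pool_cleanup_null);

        /* case 2: A decides it no longer needs its cleanup, leaving just B */
        apr_pool_cleanup_kill(pool, a_data, a_cleanup);
    }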

> It also doesn't make any sense, because the reason to run a cleanup, is to perform
> some action sooner than you would have otherwise, but in this case, we are going
> to perform that action in a few seconds anyway.

I don't get this part. A cleanup is to do just that: clean up after yourself
when the pool goes away. It provides a point in time for specific types of
actions. I'm not sure how that gives you "sooner"; if anything, a cleanup is
for running things later.

> Since the two cases above require a programmer to either remove or run a cleanup,
> they don't really make sense in the context of registering a cleanup within a
> cleanup.  This means that it is safe to register a cleanup within a cleanup,
> assuming the code is patched correctly.

Agreed. My point was addressing the "arbitrary work in a cleanup" meme that
was brought up.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: [PATCH] fix cleanups in cleanups

2001-09-20 Thread Greg Stein

On Thu, Sep 20, 2001 at 11:18:55AM -0700, Aaron Bannert wrote:
>...
> Does this fix it for you? All testmem tests passed for me and your code
> above properly flushes "Cleanup" to the file.
> 
> (Someone needs to check my work on run_child_cleanups() and make sure
> that the popping is necessary. It looked to be in the same situation.)

Calling pop_cleanup() on every iteration is a bit much. Consider the
following patch:

while ((c = p->cleanups) != NULL) {
p->cleanups = NULL;
run_cleanups(c);
}

You don't even have to change run_cleanups or run_child_cleanups.

Basically, the above code processes the cleanups in batches. Everything that
was initially registered is processed, then everything registered during the
first cleanup round, etc.

It does not maintain the LIFO behavior where cleanup A registers cleanup B
and expects B to run *just after* cleanup A finishes. If A wanted that, then
it could just call B. But the most important part: all cleanups *do* get
run.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



[PATCH] Timeout-based DoS attack fix

2001-09-20 Thread Ian Morgan

Also submitted as PR#8374.

Summary:
Default Apache distributions (as of 1.3.20) have only a single "Timeout"
directive that controls how long data transmissions and receptions should
wait before timing out.

A Denial of Service (DoS) attack that takes advantage of this single
all-encompassing Timeout has been hitting many servers. This DoS attack starts by
opening many connections to a web server, then leaving said connections open
without making any request. The web server will wait until the Timeout has
elapsed before closing the connections. In the meantime, so many new server
instances have been started that the MaxServer limit can be reached very
quickly, thereby denying any new (possibly legitimate) connections.
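For a sense of scale: with the stock Timeout default of 300 seconds and a
typical MaxClients of 150, roughly 150 idle connections opened every five
minutes are enough to keep every server slot occupied.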

The problem is exacerbated since the Timeout is usually set to a high number
of seconds, in the 300 (5 minutes) to 1200 (20 minutes) range. In this
scenario, the above "null" connections would take up to 20 minutes to time
out.

Apache badly needs distinct timeout values for transmissions and receptions.
The patch below does this by adding a new "RecvTimeout" directive, allowing
a much smaller timeout to be specified for receptions. The older "Timeout"
value is primarily used for transmissions, although a few reception cases
are still governed by it. The most significant use of the "RecvTimeout" is
for the initial "GET" request issued by a client.

RecvTimeout 5

This will cause any incoming request to time out if not completed within 5
seconds. This will cause the above "null" connections to time out very
quickly, thereby significantly reducing the number of wasted waiting server
instances.

Here's the patch (patches cleanly on standard dist of 1.3.20):
Also available here: http://www.webcon.net/opensource/apache/


diff -ur apache_1.3.20_dist+modssl/src/include/httpd.h 
apache_1.3.20_modified/src/include/httpd.h
--- apache_1.3.20_dist+modssl/src/include/httpd.h   Thu Sep 20 15:34:00 2001
+++ apache_1.3.20_modified/src/include/httpd.h  Thu Sep 20 15:13:45 2001
@@ -277,11 +277,16 @@
 #define MAX_STRING_LEN HUGE_STRING_LEN
 #define HUGE_STRING_LEN 8192

-/* The timeout for waiting for messages */
+/* The timeout for waiting for messages sent */
 #ifndef DEFAULT_TIMEOUT
 #define DEFAULT_TIMEOUT 300
 #endif

+/* The timeout for waiting for messages received */
+#ifndef DEFAULT_RECV_TIMEOUT
+#define DEFAULT_RECV_TIMEOUT 5
+#endif
+
 /* The timeout for waiting for keepalive timeout until next request */
 #ifndef DEFAULT_KEEPALIVE_TIMEOUT
 #define DEFAULT_KEEPALIVE_TIMEOUT 15
@@ -993,7 +998,8 @@
 /* Transaction handling */

 server_addr_rec *addrs;
-int timeout;   /* Timeout, in seconds, before we give up */
+int timeout;   /* Timeout, in seconds, before we give up (general)*/
+int recv_timeout;  /* Timeout, in seconds, before we give up on receives*/
 int keep_alive_timeout;/* Seconds we'll wait for another request */
 int keep_alive_max;/* Maximum requests per connection */
 int keep_alive;/* Use persistent connections? */
diff -ur apache_1.3.20_dist+modssl/src/main/http_config.c 
apache_1.3.20_modified/src/main/http_config.c
--- apache_1.3.20_dist+modssl/src/main/http_config.cThu Sep 20 15:34:00 2001
+++ apache_1.3.20_modified/src/main/http_config.c   Thu Sep 20 15:08:09 2001
@@ -1467,6 +1467,7 @@
 s->srm_confname = NULL;
 s->access_confname = NULL;
 s->timeout = 0;
+s->recv_timeout = 0;
 s->keep_alive_timeout = 0;
 s->keep_alive = -1;
 s->keep_alive_max = -1;
@@ -1524,6 +1525,9 @@
if (virt->timeout == 0)
virt->timeout = main_server->timeout;

+   if (virt->recv_timeout == 0)
+   virt->recv_timeout = main_server->recv_timeout;
+
if (virt->keep_alive_timeout == 0)
virt->keep_alive_timeout = main_server->keep_alive_timeout;

@@ -1591,6 +1595,7 @@
 s->limit_req_fieldsize = DEFAULT_LIMIT_REQUEST_FIELDSIZE;
 s->limit_req_fields = DEFAULT_LIMIT_REQUEST_FIELDS;
 s->timeout = DEFAULT_TIMEOUT;
+s->recv_timeout = DEFAULT_RECV_TIMEOUT;
 s->keep_alive_timeout = DEFAULT_KEEPALIVE_TIMEOUT;
 s->keep_alive_max = DEFAULT_KEEPALIVE;
 s->keep_alive = 1;
diff -ur apache_1.3.20_dist+modssl/src/main/http_core.c 
apache_1.3.20_modified/src/main/http_core.c
--- apache_1.3.20_dist+modssl/src/main/http_core.c  Fri Mar  9 05:10:25 2001
+++ apache_1.3.20_modified/src/main/http_core.c Thu Sep 20 14:29:52 2001
@@ -2125,6 +2125,17 @@
 return NULL;
 }

+static const char *set_recv_timeout(cmd_parms *cmd, void *dummy, char *arg)
+{
+const char *err = ap_check_cmd_context(cmd, NOT_IN_DIR_LOC_FILE|NOT_IN_LIMIT);
+if (err != NULL) {
+return err;
+}
+
+cmd->server->recv_timeout = atoi(arg);
+return NULL;
+}
+
 static const char *set_keep_alive_timeout(cmd_parms *cmd, void *dummy,
  char *arg)
 {
@@ -3090,6 +3101,7 @@
 { "ServerPath", set_server

Re: [Fwd: Re: Is building Apache 1.3.20 with Solaris CC 6.0 or 5.0 possible?]

2001-09-20 Thread Justin Erenkrantz

On Thu, Sep 20, 2001 at 01:20:05PM -0700, Danek Duvall wrote:
> Do you know why Nick is using the C++ compiler (CC) instead of the C
> compiler (cc)?  Apache builds just fine under Solaris 8 and 9 with the Forte
> 6.0 C compiler, but gives the same errors that Nick gets if I use CC
> instead.
> 
> I dunno if that's supposed to work, but he might try g++ and see if that
> gets any further ...

Oh, damn.  I got confused.  You are right - it should be cc not 
CC.  Duh.  It works here.  -- justin




Re: [Fwd: Re: Is building Apache 1.3.20 with Solaris CC 6.0 or 5.0 possible?]

2001-09-20 Thread Danek Duvall

Do you know why Nick is using the C++ compiler (CC) instead of the C
compiler (cc)?  Apache builds just fine under Solaris 8 and 9 with the Forte
6.0 C compiler, but gives the same errors that Nick gets if I use CC
instead.

I dunno if that's supposed to work, but he might try g++ and see if that
gets any further ...

Danek



[PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Joshua Slive



> -Original Message-
> From: Ryan Bloom [mailto:[EMAIL PROTECTED]]

> > I think we should also rename MaxClients to MaxWorkers.
>
> I dislike this.  MaxClients still makes sense IMHO.  It is the
> maximum number
> of clients allowed at one time.  MaxWorkers is the maximum number
> of things
> (threads or processes) running at one time.  MaxClients just
> seems easier to
> explain to me.

I understand what you mean, but I disagree.  If you look at my example

> StartWorkers  50
> MaxWorkers   150
> MinSpareWorkers   10
> MaxSpareWorkers   50
> WorkersPerProcess 25
> MaxRequestsPerProcess  0

the beauty is that everything is in the same units and is referenced the
same way.  If you change MaxWorkers to MaxClients, the first question that a
new user would ask is "how do clients differ from workers?".  The answer is
they don't, except in an obscure theoretical way.

I support StartWorkers rather than StartProcesses for the same reason.  I
agree with you that it means lots of stuff going on behind the curtains.
But I think it is worth it to have clear configuration directives.  The docs
(and perhaps the config file) should simply say "MaxWorkers and StartWorkers
should be multiples of WorkersPerProcess."

Joshua.




Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 12:10 pm, Bill Stoddard wrote:
> Now for something completely different... I am just throwing out some
> stream of consciousness thoughts.
>
> Definition - "Server" is a "process". Could replace all occurrences of
> "Server" below with "Process" or "Child". No explicit use of a term
> equivalent to "the thing that handles a request".
>
> StartServers - number of processes started. Same as in 1.3
> MaxClients - Max number of connected clients that can be supported. Same as in 1.3
> MaxRequestPerServer - Same as in 1.3
> MinSpareServers - Same as in 1.3
> MaxSpareServers - Same as in 1.3
> ThreadsPerServer - Specific to threaded MPMs.
>
> We lose a bit of control, but only a bit.  The ability of the server to
> handle client connections grows or shrinks by 'ThreadsPerServer' quanta.
>
> Thoughts?

I'm missing something here.  Those directives are defined exactly as they
exist today.  If we use this model, we don't lose any control at all.  What
don't I see?

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: 2.0.24 STATUS file

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 12:24 pm, Farag, Hany M (Hany) wrote:
> Hi,
> We read this in the STATUS file of 2.0.24:
>
> "There is a bug in how we sort some hooks, at least the pre-config
> hook.  The first time we call the hooks, they are in the correct
> order, but the second time, we don't sort them correctly.  Currently,
> the modules/http/config.m4 file has been renamed to
> modules/http/config2.m4 to work around this problem, it should moved
> back when this is fixed."
>
> But we don't understand the exact problem or the fix... Can you please
> explain?
> Also, if we did not rename the config.m4 files to config2.m4, can it cause
> a seg fault?

This bug didn't cause seg faults.  It just means that in some instances, the
hooks aren't sorted correctly, so that a function registered by the http
module was being called in the wrong order.  I am not even sure that this bug
still exists; I need to go back in and check.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Rodent of Unusual Size

Bill Stoddard wrote:
> 
> Definition - "Server" is a "process".

As before, I harbour a *very* strong dislike for using
the word 'server' to refer to anything other than the global
HTTP-handling-thing managed by apachectl.  It just confuses
the issue.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"All right everyone!  Step away from the glowing hamburger!"



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 11:41 am, Joshua Slive wrote:
> > -Original Message-
> > From: Bill Stoddard [mailto:[EMAIL PROTECTED]]
> >
> > This last one is inconsistent with your other changes.  In the
> > threaded MPM, a 'Server' by
> > your defn is a thread. MaxRequestsPerChild is used to limit the
> > number of requests a
> > 'process' serves before going away.
>
> Yes.  That's right.
>
> > In past discussions, we have almost settled on the notion of a
> > "worker" as being the thing
> > capable of serving a request.
>
> Fine.  I don't mind "worker" instead of "server".  (The only disadvantage
> is that prefork needs to change.  But that's not a big deal.)
>
> I think we should also rename MaxClients to MaxWorkers.

I dislike this.  MaxClients still makes sense IMHO.  It is the maximum number
of clients allowed at one time.  MaxWorkers is the maximum number of things
(threads or processes) running at one time.  MaxClients just seems easier to
explain to me.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 11:35 am, Sander Temme wrote:
> on 9/20/01 10:19 AM, Joshua Slive at [EMAIL PROTECTED] wrote:
> > The last one I'm not sure of, because I don't know whether this is
> > actually measured per thread or per process.  Perhaps it should be
> > MaxRequestsPerProcess.
>
> Or MaxConnectionsPerProcess, as we count multiple KeepAlive requests as one
> towards MaxRequestsPerChild.

Please don't do that.  This was brought up years ago by Manoj, and while the
directive is not exactly accurate, it is not nearly as confusing as trying to explain
why connections != requests.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



RE: 2.0.24 STATUS file

2001-09-20 Thread Farag, Hany M (Hany)

Hi,
We read this in the STATUS file of 2.0.24:

"There is a bug in how we sort some hooks, at least the pre-config
hook.  The first time we call the hooks, they are in the correct 
order, but the second time, we don't sort them correctly.  Currently,
the modules/http/config.m4 file has been renamed to 
modules/http/config2.m4 to work around this problem, it should moved
back when this is fixed."

But we don't understand the exact problem or the fix... Can you please
explain?
Also, if we did not rename the config.m4 files to config2.m4, can it cause a
seg fault?

Thanks
Hany
 
-Original Message-
From: Farag, Hany M (Hany) 
Sent: Thursday, September 20, 2001 2:25 PM
To: '[EMAIL PROTECTED]'
Subject: RE: Debugging Apache2.0 ...


Hi,
I'm trying to debug Apache 2.0.  I changed the log level in the httpd.conf
file to debug, and I also used ap_log_rerror() in my code to see the
values and other debugging info, but I can't see anything except the evil seg
fault message.
Is there any other method I can use?

Thanks
Hany

-Original Message-
From: Farag, Hany M (Hany) 
Sent: Tuesday, September 18, 2001 6:22 PM
To: [EMAIL PROTECTED]
Subject: RE: How to build Apache2.0 with more than one module


Thank you all for your help.
I can see the 2 modules included in the build.
I was missing the Makefile.in
Thanks
Hany

-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 18, 2001 5:34 PM
To: [EMAIL PROTECTED]; Farag, Hany M (Hany)
Subject: Re: How to build Apache2.0 with more than one module


On Tuesday 18 September 2001 01:53 pm, Farag, Hany M (Hany) wrote:

Did you put a Makefile.in into the one directory?

> yes, it looks like this:
>
> dnl modules enabled in this directory by default
>
> dnl APACHE_MODULE(name, helptext[, objects[, structname[, default[,
> config)
>
> APACHE_MODPATH_INIT(one)
>
> APACHE_MODULE(one, testing module one, , , yes)
>
> APR_ADDTO(LT_LDFLAGS,-export-dynamic)
>
> APACHE_MODPATH_FINISH

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Bill Stoddard

Now for something completely different... I am just throwing out some stream of
consciousness thoughts.

Definition - "Server" is a "process". Could replace all occurrences of "Server"
below with "Process" or "Child". No explicit use of a term equivalent to "the
thing that handles a request".

StartServers - number of processes started. Same as in 1.3
MaxClients - Max number of connected clients that can be supported. Same as in 1.3
MaxRequestPerServer - Same as in 1.3
MinSpareServers - Same as in 1.3
MaxSpareServers - Same as in 1.3
ThreadsPerServer - Specific to threaded MPMs.

We lose a bit of control, but only a bit.  The ability of the server to handle client
connections grows or shrinks by 'ThreadsPerServer' quanta.

Thoughts?

Bill

> StartWorkers  50
> MaxWorkers   150
> MinSpareWorkers   10
> MaxSpareWorkers   50
> WorkersPerProcess 25
> MaxRequestsPerProcess  0
>


> > Okay, changing topics only slightly... how about we replace
> > MinSpare[Threads|Servers|Workers] and
> > MaxSpare[Threads|Servers|workers] with a single
> > directive, Spare[Threads|Servers|Workers]?
>
> I don't understand that.  There needs to be some notion of slack, so that
> the server is not constantly starting and killing threads/processes to keep
> the correct number of spares.

I was thinking we could just maintain the slack internally and hide the extra config
directive. Not worth discussing further...

>
> Joshua.
>




Re: [ENHANCEMENT] htpasswd utility with DBM support

2001-09-20 Thread William A. Rowe, Jr.

From: "sterling" <[EMAIL PROTECTED]>
Sent: Thursday, September 20, 2001 1:50 PM


> Did this get dropped??
> 
> I believe this functionality is a requirement.  If anyone wants to use
> auth_dbm with apr_dbm, there is currently no reliable way to generate the
> userdatabase for the dbm their apr is built with without writing c code.
> 
> This patch *does* work, though I agree with wrowe's assessment.  Anyone
> have the time to do it right?

When Mladen and I last bounced ideas back and forth, it seemed simplest to
keep htpasswd doing what htpasswd does.

He was going to hack together an htdbm (or some such) utility whose args
mirror the current options in dbmmanage.  So as of the moment, I believe
he's started to hack it.

That htdbm must include the group (groups list) and the trailing 'comment',
just as dbmmanage offers.

I would still -love- to see apr use multiple dblibs - better yet - autodetect
which db should be used when it attempts to open one.  That might be
going a bit far, though ;)

Bill





Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Bill Stoddard

Just an FYI... I was an advocate of pretty much your earlier suggestion (the thing that
handles a request is a 'server'). That was shot down (I forget by whom). I actually prefer
'server'. FWIW :-)

Bill

> > -Original Message-
> > From: Bill Stoddard [mailto:[EMAIL PROTECTED]]
>
> > This last one is inconsistent with your other changes.  In the
> > threaded MPM, a 'Server' by
> > your defn is a thread. MaxRequestsPerChild is used to limit the
> > number of requests a
> > 'process' serves before going away.
>
> Yes.  That's right.
>
> >
> > In past discussions, we have almost settled on the notion of a
> > "worker" as being the thing
> > capable of serving a request.
>
> Fine.  I don't mind "worker" instead of "server".  (The only disadvantage is
> that prefork needs to change.  But that's not a big deal.)
>
> I think we should also rename MaxClients to MaxWorkers.
>
> > StartWorkers - ??? What do we want the option to do? Startup this
> > number of worker threads
> > or startup this number of child processes?
>
> I would like to see StartWorkers which would behave very similarly to how
> Aaron has designed MaxClients/MaxWorkers; ie. it would automatically set the
> number of child processes to launch to guarentee StartWorkers total threads.
> I do, however, see a potential problem with configuration getting fragile
> with all this stuff going on behind the scenes.
>
> To sum up, my proposal for worker is then
> StartWorkers  50
> MaxWorkers   150
> MinSpareWorkers   10
> MaxSpareWorkers   50
> WorkersPerProcess 25
> MaxRequestsPerProcess  0
>
> Prefork could work exactly the same in the absence of WorkersPerProcess.
> PerChild would need a little more thought.
>
> These are all just name changes except StartWorkers and MaxWorkers which use
> Aaron's logic to derive process numbers.
>
> > Okay, changing topics only slightly... how about we replace
> > MinSpare[Threads|Servers|Workers] and
> > MaxSpare[Threads|Servers|workers] with a single
> > directive, Spare[Threads|Servers|Workers]?
>
> I don't understand that.  There needs to be some notion of slack, so that
> the server is not constantly starting and killing threads/processes to keep
> the correct number of spares.
>
> Joshua.
>




RE: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Joshua Slive



> -Original Message-
> From: Bill Stoddard [mailto:[EMAIL PROTECTED]]

> This last one is inconsistent with your other changes.  In the
> threaded MPM, a 'Server' by
> your defn is a thread. MaxRequestsPerChild is used to limit the
> number of requests a
> 'process' serves before going away.

Yes.  That's right.

>
> In past discussions, we have almost settled on the notion of a
> "worker" as being the thing
> capable of serving a request.

Fine.  I don't mind "worker" instead of "server".  (The only disadvantage is
that prefork needs to change.  But that's not a big deal.)

I think we should also rename MaxClients to MaxWorkers.

> StartWorkers - ??? What do we want the option to do? Startup this
> number of worker threads
> or startup this number of child processes?

I would like to see StartWorkers which would behave very similarly to how
Aaron has designed MaxClients/MaxWorkers; i.e., it would automatically set the
number of child processes to launch to guarantee StartWorkers total threads.
I do, however, see a potential problem with configuration getting fragile
with all this stuff going on behind the scenes.

To sum up, my proposal for worker is then
StartWorkers  50
MaxWorkers   150
MinSpareWorkers   10
MaxSpareWorkers   50
WorkersPerProcess 25
MaxRequestsPerProcess  0

Prefork could work exactly the same in the absence of WorkersPerProcess.
PerChild would need a little more thought.

These are all just name changes except StartWorkers and MaxWorkers which use
Aaron's logic to derive process numbers.
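(For concreteness, with the values above Aaron's logic would derive the process
counts as MaxWorkers 150 / WorkersPerProcess 25 = 6 child processes at the
limit, and StartWorkers 50 / 25 = 2 child processes at startup.)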

> Okay, changing topics only slightly... how about we replace
> MinSpare[Threads|Servers|Workers] and
> MaxSpare[Threads|Servers|workers] with a single
> directive, Spare[Threads|Servers|Workers]?

I don't understand that.  There needs to be some notion of slack, so that
the server is not constantly starting and killing threads/processes to keep
the correct number of spares.

Joshua.




Re: Debugging Apache2.0 ...

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 11:24 am, Farag, Hany M (Hany) wrote:

Run it in a debugger with the -X command line option.
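For example, something along these lines (paths are hypothetical; the important
parts are -X and the gdb backtrace):

    gdb /usr/local/apache2/bin/httpd
    (gdb) run -X -f /usr/local/apache2/conf/httpd.conf
    (reproduce the failing request, then when it segfaults)
    (gdb) bt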

Ryan

> Hi,
> I'm trying to debug Apache 2.0.  I changed the log level in the httpd.conf
> file to debug, and I also used ap_log_rerror() in my code to see the
> values and other debugging info, but I can't see anything except the evil seg
> fault message.
> Is there any other method I can use?
>
> Thanks
> Hany
>
> -Original Message-
> From: Farag, Hany M (Hany)
> Sent: Tuesday, September 18, 2001 6:22 PM
> To: [EMAIL PROTECTED]
> Subject: RE: How to build Apache2.0 with more than one module
>
>
> Thank you all for your help.
> I can see the 2 modules included in the build.
> I was missing the Makefile.in
> Thanks
> Hany
>
> -Original Message-
> From: Ryan Bloom [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, September 18, 2001 5:34 PM
> To: [EMAIL PROTECTED]; Farag, Hany M (Hany)
> Subject: Re: How to build Apache2.0 with more than one module
>
>
> On Tuesday 18 September 2001 01:53 pm, Farag, Hany M (Hany) wrote:
>
> Did you put a Makefile.in into the one directory?
>
> > yes, it looks like this:
> >
> > dnl modules enabled in this directory by default
> >
> > dnl APACHE_MODULE(name, helptext[, objects[, structname[, default[,
> > config)
> >
> > APACHE_MODPATH_INIT(one)
> >
> > APACHE_MODULE(one, testing module one, , , yes)
> >
> > APR_ADDTO(LT_LDFLAGS,-export-dynamic)
> >
> > APACHE_MODPATH_FINISH
>
> __
> Ryan Bloom[EMAIL PROTECTED]
> Covalent Technologies [EMAIL PROTECTED]
> --

-- 

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 11:14 am, Bill Stoddard wrote:
> This last one is inconsistent with your other changes.  In the threaded
> MPM, a 'Server' by your defn is a thread. MaxRequestsPerChild is used to
> limit the number of requests a 'process' serves before going away.
>
> In past discussions, we have almost settled on the notion of a "worker" as
> being the thing capable of serving a request.
>
> MinSpareWorkers
> MaxSpareWorkers
>
> StartWorkers - ??? What do we want the option to do? Startup this number of
> worker threads or startup this number of child processes?

I think we want to keep Start* as the number of processes to start, because 
otherwise, we have to deal with people asking for 30 threads to start, and 25
threads per process.  In that case, we would have to tweak their values.  I would
prefer to do that as little as possible.  

> WorkersPerProcess
> MaxRequestsPerChild (or MaxRequestsPerProcess)
>
>
> Okay, changing topics only slightly... how about we replace
> MinSpare[Threads|Servers|Workers] and MaxSpare[Threads|Servers|workers]
> with a single directive, Spare[Threads|Servers|Workers]?

How would you do that?  We want the range, so that we don't have to kill
off servers unless we have too many.  I guess my only complaint with a single
directive is that it is removing control.  Instead of being able to say, leave
between 5 and 10 processes waiting idle, we have to say leave 8 processes
waiting idle.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



FW: [donotreply@Apache.Org: Re: OpenBSD + Apache as heavy loaded webserver and the cgi problem]

2001-09-20 Thread Apache Software Foundation

Months-old misfiled mail.. not acked.

- Forwarded message from Henning Brauer <[EMAIL PROTECTED]> -

From: Henning Brauer <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: [[EMAIL PROTECTED]: Re: OpenBSD + Apache as heavy loaded webserver and 
the cgi problem]
Date: Mon, 15 Jan 2001 04:23:46 +0100
User-Agent: Mutt/1.2.5i

may fit on your performance tuning page...

From: Henning Brauer <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: OpenBSD + Apache as heavy loaded webserver and the cgi problem
Date: Mon, 15 Jan 2001 03:37:17 +0100

Hi all,

a while ago I sent some apache header file modifications to run under heavy
load on OpenBSD. Unfortunately the original problem came up again: the
server was unable to start CGIs. I found a solution now:

in /usr/src/sys/sys/syslimits.h, change:

#define CHILD_MAX 512
#define OPEN_MAX 512

(defining them as options in the kernel config does NOT work!)
in your kernel config:

maxusers 512
option NMBCLUSTERS=8192
option NKMEMCLUSTERS=8192
option MAX_KMAP=120
option MAX_KMAPENT=6000

in apache's httpd.h:

#define HARD_SERVER_LIMIT 2048

and, after recompiling, adjust the parameters in your apache config file,
especially MaxClients, MaxSpareServers, MinSpareServers and the Keepalive
stuff. Note that starting new apache processes is expensive, so don't set
the startservers and spareservers stuff too low.
The optimal values may differ depending on your load.
Whenever you see error messages like "couldn't spawn child process" the
values for CHILD_MAX and OPEN_MAX could be your problem. Also play with
FD_SETSIZE in apache's ap_config.h; I'm still figuring out the optimal
value for my setup.
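Purely as an illustration, a hypothetical httpd.conf fragment along those lines
(placeholder values, not recommendations; MaxClients must stay at or below the
rebuilt HARD_SERVER_LIMIT):

    MaxClients        1024
    StartServers      64
    MinSpareServers   32
    MaxSpareServers   128
    KeepAlive         On
    KeepAliveTimeout  15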

Greetings

Henning

-- 
Henning Brauer | BS Web Services
Hostmaster BSWS| Roedingsmarkt 14
[EMAIL PROTECTED] | 20459 Hamburg
http://www.bsws.de | Germany

- End forwarded message -

-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"All right everyone!  Step away from the glowing hamburger!"



Re: [PATCH] update to default worker MPM config to matchMaxClients fix

2001-09-20 Thread Sander Temme

on 9/20/01 10:19 AM, Joshua Slive at [EMAIL PROTECTED] wrote:

> The last one I'm not sure of, because I don't know whether this is actually
> measured per thread or per process.  Perhaps it should be
> MaxRequestsPerProcess.

Or MaxConnectionsPerProcess, as we count multiple KeepAlive requests as one
towards MaxRequestsPerChild.

S.

-- 
Covalent Technologies [EMAIL PROTECTED]
Engineering groupVoice: (415) 536 5214
645 Howard St. Fax: (415) 536 5210
San Francisco CA 94105

   PGP Fingerprint: 1E74 4E58 DFAC 2CF5 6A03  5531 AFB1 96AF B584 0AB1

===
This email message is for the sole use of the intended recipient(s) and may
contain confidential and privileged information. Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message
===




Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 10:53 am, Aaron Bannert wrote:
> On Thu, Sep 20, 2001 at 10:51:16AM -0700, Ryan Bloom wrote:
> > This has been discussed a lot on list, but we never really come to a
> > conclusion. I would suggest that we just change the names, and let the
> > flames fall where they may.
> >
> > I like the idea of changing StartServers to StartProcesses, and Min/Max
> > SpareThreads to Min/Max SpareServers.  We do not want to change
> > MaxRequestsPerChild though, because we are still talking about the
> maximum number of requests each child process will serve.  In threaded
> > and worker, we count requests for the whole child process, not for each
> > thread.  I also would not change ThreadsPerChild, because we are talking
> > about the number of threads in each child process.
>
> If we are going to only change names and not definitions (which is what
> I think Ryan is suggesting), then I'd rather we did it after this patch
> goes through.

I agree, this patch should be committed regardless.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



RE: Debugging Apache2.0 ...

2001-09-20 Thread Farag, Hany M (Hany)

Hi,
I'm trying to debug Apache 2.0.  I changed the log level in the httpd.conf
file to debug, and I also used ap_log_rerror() in my code to see the
values and other debugging info, but I can't see anything except the evil seg
fault message.
Is there any other method I can use?

Thanks
Hany

-Original Message-
From: Farag, Hany M (Hany) 
Sent: Tuesday, September 18, 2001 6:22 PM
To: [EMAIL PROTECTED]
Subject: RE: How to build Apache2.0 with more than one module


Thank you all for your help.
I can see the 2 modules included in the build.
I was missing the Makefile.in
Thanks
Hany

-Original Message-
From: Ryan Bloom [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 18, 2001 5:34 PM
To: [EMAIL PROTECTED]; Farag, Hany M (Hany)
Subject: Re: How to build Apache2.0 with more than one module


On Tuesday 18 September 2001 01:53 pm, Farag, Hany M (Hany) wrote:

Did you put a Makefile.in into the one directory?

> yes, it looks like this:
>
> dnl modules enabled in this directory by default
>
> dnl APACHE_MODULE(name, helptext[, objects[, structname[, default[,
> config)
>
> APACHE_MODPATH_INIT(one)
>
> APACHE_MODULE(one, testing module one, , , yes)
>
> APR_ADDTO(LT_LDFLAGS,-export-dynamic)
>
> APACHE_MODPATH_FINISH

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



[PATCH] fix cleanups in cleanups (Was Re: New post-log-transaction hook?)

2001-09-20 Thread Aaron Bannert

On Wed, Sep 19, 2001 at 12:27:35PM -0700, Jon Travis wrote:
> BZzzzt.  The attached code registers a cleanup from within a cleanup, and
> does so 'correctly'.  See the program attached at the bottom, which behaves 
> incorrectly.  It is simple code, but not knowing that a given
> function registers a cleanup can cause major problems (leaking
> file descriptors, etc. eventually).  The file should contain 'Cleanup',
> because the cleanup of the file should flush the buffer -- that
> cleanup is never run, though.
> 
> > when the cleanup is registered, it is gauranteed to be there when the cleanup
> > is run.
> > 
> > Anything else is completely broken.
> 
> 
> #include "apr.h"
> #include "apr_file_io.h"
> 
> static apr_status_t my_cleanup(void *cbdata){
> apr_pool_t *p = cbdata;
> apr_file_t *file;
> 
> apr_file_open(&file, "/tmp/bonk", 
> APR_WRITE | APR_CREATE | APR_TRUNCATE | APR_BUFFERED,
> APR_OS_DEFAULT, p);
> apr_file_printf(file, "Cleanup");
> return APR_SUCCESS;
> }
> 
> int main(int argc, char *argv[]){
> apr_pool_t *pool;
> 
> apr_initialize();
> apr_pool_create(&pool, NULL);
> apr_pool_cleanup_register(pool, pool, my_cleanup, NULL);
> apr_pool_destroy(pool);
> apr_terminate();
> return 0;
> }



Does this fix it for you? All testmem tests passed for me and your code
above properly flushes "Cleanup" to the file.

(Someone needs to check my work on run_child_cleanups() and make sure
that the popping is necessary. It looked to be in the same situation.)

-aaron


Index: memory/unix/apr_pools.c
===
RCS file: /home/cvspublic/apr/memory/unix/apr_pools.c,v
retrieving revision 1.111
diff -u -r1.111 apr_pools.c
--- memory/unix/apr_pools.c 2001/09/17 20:12:23 1.111
+++ memory/unix/apr_pools.c 2001/09/20 18:06:46
@@ -564,7 +564,8 @@
 struct process_chain;
 struct cleanup;
 
-static void run_cleanups(struct cleanup *c);
+static struct cleanup *pop_cleanup(apr_pool_t *p);
+static void run_cleanups(apr_pool_t *p);
 static void free_proc_chain(struct process_chain *p);
 
 static apr_pool_t *permanent_pool;
@@ -764,26 +765,35 @@
 return (*cleanup) (data);
 }
 
-static void run_cleanups(struct cleanup *c)
+static struct cleanup *pop_cleanup(apr_pool_t *p)
 {
-while (c) {
-   (*c->plain_cleanup) ((void *)c->data);
-   c = c->next;
+struct cleanup *c;
+if ((c = p->cleanups)) {
+p->cleanups = c->next;
+c->next = NULL;
 }
+return c;
 }
 
-static void run_child_cleanups(struct cleanup *c)
+static void run_cleanups(apr_pool_t *p)
 {
-while (c) {
+struct cleanup *c;
+while ((c = pop_cleanup(p))) {
+(*c->plain_cleanup) ((void *)c->data);
+}
+}
+
+static void run_child_cleanups(apr_pool_t *p)
+{
+struct cleanup *c;
+while ((c = pop_cleanup(p))) {
(*c->child_cleanup) ((void *)c->data);
-   c = c->next;
 }
 }
 
 static void cleanup_pool_for_exec(apr_pool_t *p)
 {
-run_child_cleanups(p->cleanups);
-p->cleanups = NULL;
+run_child_cleanups(p);
 
 for (p = p->sub_pools; p; p = p->sub_next) {
cleanup_pool_for_exec(p);
@@ -863,8 +873,7 @@
 }
 
 /* run cleanups and free any subprocesses. */
-run_cleanups(a->cleanups);
-a->cleanups = NULL;
+run_cleanups(a);
 free_proc_chain(a->subprocesses);
 a->subprocesses = NULL;
 



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Bill Stoddard

>
> > -Original Message-
> > From: Aaron Bannert [mailto:[EMAIL PROTECTED]]
>
> >  
> > -StartServers 3
> > -MaxClients   8
> > -MinSpareThreads  5
> > +StartServers 2
> > +MaxClients 150
> > +MinSpareThreads 25
> >  MaxSpareThreads 75
> >  ThreadsPerChild 25
> >  MaxRequestsPerChild  0
>
> I think this is going in the right direction.  Two comments:
>
> 1. MinSpareThreads is way too high. There is no reason to have 25 idle
> threads hanging around at all times.  The original figure of 5 seems fine to
> me.
>
> 2. Naming:
> I think we should define Server="thing capable of serving requests" and
> completely get rid of "Child" which is ambiguous.  Then we can change
> MinSpareThreads -> MinSpareServers
> MaxSpareThreads -> MaxSpareServers
> StartServers -> StartProcesses
> ThreadsPerChild -> ThreadsPerProcess

> MaxRequestsPerChild -> MaxRequestsPerServer

This last one is inconsistent with your other changes.  In the threaded MPM, a
'Server' by your defn is a thread. MaxRequestsPerChild is used to limit the
number of requests a 'process' serves before going away.

In past discussions, we have almost settled on the notion of a "worker" as being
the thing capable of serving a request.

MinSpareWorkers
MaxSpareWorkers

StartWorkers - ??? What do we want the option to do? Start up this number of
worker threads or start up this number of child processes?

WorkersPerProcess
MaxRequestsPerChild (or MaxRequestsPerProcess)


Okay, changing topics only slightly... how about we replace
MinSpare[Threads|Servers|Workers] and MaxSpare[Threads|Servers|Workers] with a single
directive, Spare[Threads|Servers|Workers]?

Bill





RE: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Joshua Slive



> -Original Message-
> From: Ryan Bloom [mailto:[EMAIL PROTECTED]]
>
> I like the idea of changing StartServers to StartProcesses, and Min/Max
> SpareThreads to Min/Max SpareServers.  We do not want to change
> MaxRequestsPerChild though, because we are still talking about the maximum
> number of requests each child process will server.  In threaded
> and worker,
> we count requests for the whole child process, not for each
> thread.  I also would
> not change ThreadsPerChild, because we are talking about the
> number of threads
> in each child process.

Yes, but, at least to me, "Child" does not naturally imply "Child Process".
When thinking about a threaded MPM, "child" could just as naturally mean
"thread".  So my suggestion for MaxRequestsPerChild should be
MaxRequestsPerProcess and ThreadsPerChild should be ThreadsPerProcess.
Perhaps the child->process mapping is clearer to others.

Joshua.




Re: [PATCH] get TRACE to work again

2001-09-20 Thread William A. Rowe, Jr.

From: "Jeff Trawick" <[EMAIL PROTECTED]>
Sent: Thursday, September 20, 2001 12:38 PM


> Currently, when the map-to-storage handler for TRACE returns DONE, the
> caller -- ap_process_request_internal() -- catches that and returns
> OK to its caller -- ap_process_request().  But ap_process_request(),
> seeing OK, tries to run a handler.  It needs to skip that if the
> request was completed in ap_process_request_internal().

Yuck, my bad.

> So what am I missing :)

Nothing, please commit.

> Index: modules/http/http_request.c
> ===
> RCS file: /home/cvspublic/httpd-2.0/modules/http/http_request.c,v
> retrieving revision 1.114
> diff -u -r1.114 http_request.c
> --- modules/http/http_request.c 2001/09/19 05:52:42 1.114
> +++ modules/http/http_request.c 2001/09/20 17:26:35
> @@ -284,6 +284,10 @@
>  access_status = ap_process_request_internal(r);
>  if (access_status == OK)
>  access_status = ap_invoke_handler(r);
> +else if (access_status == DONE) {
> +/* e.g., something not in storage like TRACE */
> +access_status = OK;
> +}
>  }
>  
>  if (access_status == OK) {
> Index: server/request.c
> ===================================================================
> RCS file: /home/cvspublic/httpd-2.0/server/request.c,v
> retrieving revision 1.50
> diff -u -r1.50 request.c
> --- server/request.c 2001/09/06 17:58:28 1.50
> +++ server/request.c 2001/09/20 17:26:38
> @@ -162,10 +162,7 @@
>  
>  if ((access_status = ap_run_map_to_storage(r))) {
>  /* This request wasn't in storage (e.g. TRACE) */
> -if (access_status == DONE)
> - return OK;
> - else
> -return access_status;
> +return access_status;
>  }
>  
>  if ((access_status = ap_location_walk(r))) {
> 
> 
> -- 
> Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site:
>http://www.geocities.com/SiliconValley/Park/9289/
>  Born in Roswell... married an alien...
> 




Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Aaron Bannert

On Thu, Sep 20, 2001 at 10:51:16AM -0700, Ryan Bloom wrote:
> This has been discussed a lot on list, but we never really come to a conclusion.
> I would suggest that we just change the names, and let the flames fall where
> they may.
> 
> I like the idea of changing StartServers to StartProcesses, and Min/Max 
> SpareThreads to Min/Max SpareServers.  We do not want to change 
> MaxRequestsPerChild though, because we are still talking about the maximum
> number of requests each child process will serve.  In threaded and worker,
> we count requests for the whole child process, not for each thread.  I also would
> not change ThreadsPerChild, because we are talking about the number of threads
> in each child process.

If we are going to only change names and not definitions (which is what
I think Ryan is suggesting), then I'd rather we did it after this patch
goes through.

-aaron



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 10:44 am, Aaron Bannert wrote:
> On Thu, Sep 20, 2001 at 01:19:39PM -0400, Joshua Slive wrote:
>
> > 2. Naming:
> > I think we should define Server="thing capable of serving requests" and
> > completely get rid of "Child" which is ambiguous.  Then we can change
> > MinSpareThreads -> MinSpareServers
> > MaxSpareThreads -> MaxSpareServers
> > StartServers -> StartProcesses
> > ThreadsPerChild -> ThreadsPerProcess
> > MaxRequestsPerChild -> MaxRequestsPerServer
> >
> > The first two are clearly better because they are more consistent with
> > prefork and easier to understand.
> >
> > The third one is less consistent with prefork, but is much less
> > ambiguous.
> >
> > The last one I'm not sure of, because I don't know whether this is
> > actually measured per thread or per process.  Perhaps it should be
> > MaxRequestsPerProcess.
> >
> > This has been hashed over already a couple times.  I hope what I am
> > proposing here is close to what we were talking about before.  I know
> > there was a suggestion to use "worker" for what I am using "server" for.
>
> I'm going to stay out of this one. I just spent the last few days trying to
> force those square-peg names we have into the round hole in my head, so
> you've got no complaints from me. It might suit us better, however, if we
> try to do a higher-level evaluation of all the MPM directives (especially
> their definitions in our docs and config comments).

This has been discussed a lot on list, but we never really come to a conclusion.
I would suggest that we just change the names, and let the flames fall where
they may.

I like the idea of changing StartServers to StartProcesses, and Min/Max 
SpareThreads to Min/Max SpareServers.  We do not want to change 
MaxRequestsPerChild though, because we are still talking about the maximum
number of requests each child process will serve.  In threaded and worker,
we count requests for the whole child process, not for each thread.  I also would
not change ThreadsPerChild, because we are talking about the number of threads
in each child process.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Aaron Bannert

On Thu, Sep 20, 2001 at 01:19:39PM -0400, Joshua Slive wrote:
> 
> 
> > -Original Message-
> > From: Aaron Bannert [mailto:[EMAIL PROTECTED]]
> 
> >  
> > -StartServers 3
> > -MaxClients   8
> > -MinSpareThreads  5
> > +StartServers 2
> > +MaxClients 150
> > +MinSpareThreads 25
> >  MaxSpareThreads 75
> >  ThreadsPerChild 25
> >  MaxRequestsPerChild  0
> 
> I think this is going in the right direction.  Two comments:
> 
> 1. MinSpareThreads is way too high. There is no reason to have 25 idle
> threads hanging around at all times.  The original figure of 5 seems fine to
> me.

Threads are cheap, but child processes are expensive. You may be correct,
but I wanted to make sure that this thing responds quickly to load spikes
(that is after all what the worker MPM is for -- scalability on big iron :).
I would not object to a lower MinSpareThreads value.

> 2. Naming:
> I think we should define Server="thing capable of serving requests" and
> completely get rid of "Child" which is ambiguous.  Then we can change
> MinSpareThreads -> MinSpareServers
> MaxSpareThreads -> MaxSpareServers
> StartServers -> StartProcesses
> ThreadsPerChild -> ThreadsPerProcess
> MaxRequestsPerChild -> MaxRequestsPerServer
> 
> The first two are clearly better because they are more consistent with
> prefork and easier to understand.
> 
> The third one is less consistent with prefork, but is much less ambiguous.
> 
> The last one I'm not sure of, because I don't know whether this is actually
> measured per thread or per process.  Perhaps it should be
> MaxRequestsPerProcess.
> 
> This has been hashed over already a couple times.  I hope what I am
> proposing here is close to what we were talking about before.  I know there
> was a suggestion to use "worker" for what I am using "server" for.

I'm going to stay out of this one. I just spent the last few days trying to
force those square-peg names we have into the round hole in my head, so
you've got no complaints from me. It might suit us better, however, if we
try to do a higher-level evaluation of all the MPM directives (especially
their definitions in our docs and config comments).

-aaron



[PATCH] get TRACE to work again

2001-09-20 Thread Jeff Trawick

Currently, when the map-to-storage handler for TRACE returns DONE, the
caller -- ap_process_request_internal() -- catches that and returns
OK to its caller -- ap_process_request().  But ap_process_request(),
seeing OK, tries to run a handler.  It needs to skip that if the
request was completed in ap_process_request_internal().

So what am I missing :)

Index: modules/http/http_request.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/modules/http/http_request.c,v
retrieving revision 1.114
diff -u -r1.114 http_request.c
--- modules/http/http_request.c 2001/09/19 05:52:42 1.114
+++ modules/http/http_request.c 2001/09/20 17:26:35
@@ -284,6 +284,10 @@
 access_status = ap_process_request_internal(r);
 if (access_status == OK)
 access_status = ap_invoke_handler(r);
+else if (access_status == DONE) {
+/* e.g., something not in storage like TRACE */
+access_status = OK;
+}
 }
 
 if (access_status == OK) {
Index: server/request.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/request.c,v
retrieving revision 1.50
diff -u -r1.50 request.c
--- server/request.c    2001/09/06 17:58:28    1.50
+++ server/request.c    2001/09/20 17:26:38
@@ -162,10 +162,7 @@
 
 if ((access_status = ap_run_map_to_storage(r))) {
 /* This request wasn't in storage (e.g. TRACE) */
-if (access_status == DONE)
-   return OK;
-   else
-return access_status;
+return access_status;
 }
 
 if ((access_status = ap_location_walk(r))) {


-- 
Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site:
   http://www.geocities.com/SiliconValley/Park/9289/
 Born in Roswell... married an alien...



RE: [PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Joshua Slive



> -Original Message-
> From: Aaron Bannert [mailto:[EMAIL PROTECTED]]

>  
> -StartServers 3
> -MaxClients   8
> -MinSpareThreads  5
> +StartServers 2
> +MaxClients 150
> +MinSpareThreads 25
>  MaxSpareThreads 75
>  ThreadsPerChild 25
>  MaxRequestsPerChild  0

I think this is going in the right direction.  Two comments:

1. MinSpareThreads is way too high. There is no reason to have 25 idle
threads hanging around at all times.  The original figure of 5 seems fine to
me.

2. Naming:
I think we should define Server="thing capable of serving requests" and
completely get rid of "Child" which is ambiguous.  Then we can change
MinSpareThreads -> MinSpareServers
MaxSpareThreads -> MaxSpareServers
StartServers -> StartProcesses
ThreadsPerChild -> ThreadsPerProcess
MaxRequestsPerChild -> MaxRequestsPerServer

The first two are clearly better because they are more consistent with
prefork and easier to understand.

The third one is less consistent with prefork, but is much less ambiguous.

The last one I'm not sure of, because I don't know whether this is actually
measured per thread or per process.  Perhaps it should be
MaxRequestsPerProcess.

This has been hashed over already a couple times.  I hope what I am
proposing here is close to what we were talking about before.  I know there
was a suggestion to use "worker" for what I am using "server" for.

Joshua.




[PATCH] update to default worker MPM config to match MaxClients fix

2001-09-20 Thread Aaron Bannert

Here's the config update I promised. As I mentioned earlier, this
should bring the behavior of the worker MPM in line with prefork
and the common definitions of these directives.

These defaults are of course not set in stone. If anyone has a better
idea how to get the best results from some default worker MPM params,
feel free to update this -- I'm mostly interested in changing the comment
and the default MaxClients value.

-aaron


Index: docs/conf/httpd-std.conf
===================================================================
RCS file: /home/cvspublic/httpd-2.0/docs/conf/httpd-std.conf,v
retrieving revision 1.49
diff -u -r1.49 httpd-std.conf
--- docs/conf/httpd-std.conf    2001/09/16 19:15:59    1.49
+++ docs/conf/httpd-std.conf    2001/09/20 16:39:37
@@ -132,15 +132,15 @@
 
 # worker MPM
 # StartServers: initial number of server processes to start
-# MaxClients: maximum number of server processes allowed to start
+# MaxClients: maximum number of simultaneous client connections
 # MinSpareThreads: minimum number of worker threads which are kept spare
 # MaxSpareThreads: maximum number of worker threads which are kept spare
 # ThreadsPerChild: constant number of worker threads in each server process
 # MaxRequestsPerChild: maximum number of requests a server process serves
 
-StartServers 3
-MaxClients   8
-MinSpareThreads  5
+StartServers 2
+MaxClients 150
+MinSpareThreads 25
 MaxSpareThreads 75 
 ThreadsPerChild 25
 MaxRequestsPerChild  0



Re: cvs commit: httpd-proxy/module-2.0 CHANGES mod_proxy.c mod_proxy.h proxy_http.c

2001-09-20 Thread Ian Holsman

On Thu, 2001-09-20 at 02:05, Graham Leggett wrote:
> [EMAIL PROTECTED] wrote:
> 
> >   Added New Option 'HTTPProxyOverrideReturnedErrors' which lets the
> server override
> >   the error pages returned from the proxied server and replace them
> with the standard
> >   server error handling on the main server.
> 
> I don't like the name of the option - it should start with Proxy* in
> order to be consistent with the other options.
> 
> Something like "ProxyErrorOverride"?
> 
> Is there a reason this option doesn't also work with FTP? (Admittedly I
> haven't looked at the ftp code for a while, I'm not sure if it would
> make sense, but for consistency I think it should work for both).
I'm not sure how the FTP error handling works, or how the HTTP server
would handle mapping the errors to HTTP codes, so I'm not sure if it is
applicable.

The option should, I guess, be set up/configured in the proxy_http.c code,
but I'm not sure how that will work with all the other config stuff.

> 
> Regards,
> Graham
> -- 
> -
> [EMAIL PROTECTED]  "There's a moon
>   over Bourbon Street
>   tonight..."
-- 
Ian Holsman  [EMAIL PROTECTED]
Performance Measurement & Analysis
CNET Networks   -   (415) 364-8608




[PATCH] fix MaxClients to match definition in worker MPM

2001-09-20 Thread Aaron Bannert

I've been told by numerous people that MaxClients is defined as the
maximum number of concurrent connections that the server is allowed
to handle. This patch makes the worker MPM match that definition.

1) At the pre_config stage, it traverses the config tree and makes sure
  that ThreadsPerChild is set before MaxClients. I opted for copying
  the data rather than moving pointers (less error prone, the data is
  all pointers so it's quick, and it only happens once at starttime, so NBD).

2) As the directive runs set_server_limit(), a few extra checks
   and calculations happen:

  a) MaxClients must be greater than ThreadsPerChild
  b) A warning is issued if MaxClients is not a multiple of ThreadsPerChild
  c) ap_daemons_limit is calculated as the integer truncation of
  (MaxClients / ThreadsPerChild)
  d) Finally, the original check to make sure ap_daemons_limit does not
 exceed HARD_SERVER_LIMIT.

 (Note: none of the checks are fatal. Each produces a warning and continues
with an approximated setting, which is described in the warning.)
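
A small sketch (function name and sample values assumed; this is not part of
the patch) of the calculation described in (2c) above:

    /* MaxClients caps simultaneous connections; each child process
     * contributes ThreadsPerChild workers, so the process limit is the
     * integer truncation of their ratio. */
    static int daemons_from_max_clients(int max_clients, int threads_per_child)
    {
        return max_clients / threads_per_child;
    }

    /* daemons_from_max_clients(150, 25) == 6 child processes.  A
     * non-multiple such as 140 truncates to 5 processes (125 simultaneous
     * connections), which is what the warning in (2b) flags. */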


I shall follow up this posting with another patch to change the default
httpd.conf, as well as change the worker MPM defaults to something
sane. For now, if you'd like to test this, I suggest:


StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 50
ThreadsPerChild 25
MaxRequestsPerChild  0


(This config should be consistent with the prefork MPM now.)

-aaron


Index: server/mpm/worker/worker.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/mpm/worker/worker.c,v
retrieving revision 1.26
diff -u -r1.26 worker.c
--- server/mpm/worker/worker.c  2001/09/19 18:47:31 1.26
+++ server/mpm/worker/worker.c  2001/09/20 16:09:02
@@ -1422,7 +1422,45 @@
 {
 static int restart_num = 0;
 int no_detach, debug;
+ap_directive_t *pdir;
+ap_directive_t *max_clients = NULL;
 
+/* make sure that "ThreadsPerChild" gets set before "MaxClients" */
+for (pdir = ap_conftree; pdir != NULL; pdir = pdir->next) {
+if (strncasecmp(pdir->directive, "ThreadsPerChild", 15) == 0) {
+if (!max_clients) {
+break; /* we're in the clear, got ThreadsPerChild first */
+}
+else {
+/* now to swap the data */
+ap_directive_t temp;
+
+temp.directive = pdir->directive;
+temp.args = pdir->args;
+/* Make sure you don't change 'next', or you may get loops! */
+/* XXX: first_child, parent, and data can never be set
+ * for these directives, right? -aaron */
+temp.filename = pdir->filename;
+temp.line_num = pdir->line_num;
+
+pdir->directive = max_clients->directive;
+pdir->args = max_clients->args;
+pdir->filename = max_clients->filename;
+pdir->line_num = max_clients->line_num;
+
+max_clients->directive = temp.directive;
+max_clients->args = temp.args;
+max_clients->filename = temp.filename;
+max_clients->line_num = temp.line_num;
+break;
+}
+}
+else if (!max_clients
+&& strncasecmp(pdir->directive, "MaxClients", 10) == 0) {
+max_clients = pdir;
+}
+}
+
 debug = ap_exists_config_define("DEBUG");
 
 if (debug)
@@ -1515,21 +1553,52 @@
 static const char *set_server_limit (cmd_parms *cmd, void *dummy,
 const char *arg) 
 {
+int max_clients;
 const char *err = ap_check_cmd_context(cmd, GLOBAL_ONLY);
 if (err != NULL) {
 return err;
 }
 
-ap_daemons_limit = atoi(arg);
+/* It is ok to use ap_threads_per_child here because we are
+ * sure that it gets set before MaxClients in the pre_config stage. */
+max_clients = atoi(arg);
+if (max_clients < ap_threads_per_child) {
+   ap_log_error(APLOG_MARK, APLOG_STARTUP | APLOG_NOERRNO, 0, NULL, 
+"WARNING: MaxClients (%d) must be at least as large",
+max_clients);
+   ap_log_error(APLOG_MARK, APLOG_STARTUP | APLOG_NOERRNO, 0, NULL, 
+" large as ThreadsPerChild (%d). Automatically",
+ap_threads_per_child);
+   ap_log_error(APLOG_MARK, APLOG_STARTUP | APLOG_NOERRNO, 0, NULL, 
+" increasing MaxClients to %d.",
+ap_threads_per_child);
+   max_clients = ap_threads_per_child;
+}
+ap_daemons_limit = max_clients / ap_threads_per_child;
+if ((max_clients > 0) && (max_clients % ap_threads_per_child)) {
+   ap_log_error(APLOG_MARK, APLOG_STARTUP | APLOG_NOERRNO, 0, NULL, 
+"WARNING: MaxClients (%d) is not an integer multiple",
+max_clients);
+

Re: server reached MaxClients setting

2001-09-20 Thread Jim Jagielski

Sascha Schumann wrote:
> 
> On Thu, 20 Sep 2001, Jim Jagielski wrote:
> 
> > At 11:33 AM -0300 9/20/01, Daniel Abad wrote:
> > >Is it really a problem??? Or just warning?
> > >
> > >[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
> > >consider raising the MaxClients setting
> > >
> >
> > No doubt, you are getting hammered by Nimba causing your server to
> > spawn extra processes to handle the increased load... So actually,
> > it's a *good* thing since it's preventing the attack from
> > consuming all your server resources.
> 
> Or he is serving large files to slow clients, so that a lot
> of Apache processes are blocked for a longer period of time.
> That happens regularly to www.php.net with MaxClients 256.
> One way to fix that is recompiling Apache to handle even more
> clients, but that increases the overall RAM usage of course.
> So we usually just choose the lazy route, install an
> additional thttpd and redirect requests as appropriate.
> 

Hell it could be a ton of things (someone uploaded a nude picture
of Bridget Fonda on his server)... 

-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  "A society that will trade a little liberty for a little order
   will lose both and deserve neither"



Re: server reached MaxClients setting

2001-09-20 Thread Aaron Bannert

On Thu, Sep 20, 2001 at 11:33:10AM -0300, Daniel Abad wrote:
> Is it really a problem??? Or just warning?
> 
> [Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
> consider raising the MaxClients setting

What MPM are you using (or are you using 1.3)?

-aaron



RES: server reached MaxClients setting

2001-09-20 Thread Daniel Abad

See what happens when it is increased...

[Thu Sep 20 00:13:05 2001] [error] (35)Resource temporarily unavailable:
fork: Unable to fork new process

Dan

-Original Message-
From: Paul Hooper [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 20 September 2001 11:45
To: '[EMAIL PROTECTED]'
Subject: RE: server reached MaxClients setting


Warning only - your MaxClients directive is set too low.  Increasing it will
stop this message from appearing in your server's error log.

-Original Message-
From: Daniel Abad [mailto:[EMAIL PROTECTED]]
Sent: 20 September 2001 15:33
To: '[EMAIL PROTECTED]'
Subject: server reached MaxClients setting


Is it really a problem??? Or just warning?

[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
consider raising the MaxClients setting


Tks.


Dan






Re: server reached MaxClients setting

2001-09-20 Thread Sascha Schumann

On Thu, 20 Sep 2001, Jim Jagielski wrote:

> At 11:33 AM -0300 9/20/01, Daniel Abad wrote:
> >Is it really a problem??? Or just warning?
> >
> >[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
> >consider raising the MaxClients setting
> >
>
> No doubt, you are getting hammered by Nimba causing your server to
> spawn extra processes to handle the increased load... So actually,
> it's a *good* thing since it's preventing the attack from
> consuming all your server resources.

Or he is serving large files to slow clients, so that a lot
of Apache processes are blocked for a longer period of time.
That happens regularly to www.php.net with MaxClients 256.
One way to fix that is recompiling Apache to handle even more
clients, but that increases the overall RAM usage of course.
So we usually just choose the lazy route, install an
additional thttpd and redirect requests as appropriate.

- Sascha                                     Experience IRCG
  http://schumann.cx/                         http://schumann.cx/ircg




Re: server reached MaxClients setting

2001-09-20 Thread Jim Jagielski

At 11:33 AM -0300 9/20/01, Daniel Abad wrote:
>Is it really a problem??? Or just warning?
>
>[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
>consider raising the MaxClients setting
>

No doubt, you are getting hammered by Nimba causing your server to
spawn extra processes to handle the increased load... So actually,
it's a *good* thing since it's preventing the attack from
consuming all your server resources.
-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  "A society that will trade a little liberty for a little order
   will lose both and deserve neither"



Re: [PATCH] Standardize AcceptMutex config

2001-09-20 Thread Aaron Bannert

Now we're just decreasing the signal-to-noise ratio. :)

-aaron


On Thu, Sep 20, 2001 at 09:45:57AM -0400, Bill Stoddard wrote:
> Ooops! And the list grows with each post we make :-)
> 
> proc_pthread, proc_pthread, proc_pthread...
> 
> Bill
> 
> > On Wednesday 19 September 2001 09:27 pm, Bill Stoddard wrote:
> > > proc_thread doesn't tell me anything. If I google for proc_thread, I get no
> > > hits. If I google pthread, I at least get hits that I can search through to
> > > find anything to do with a 'lock'. pthread is easier to read than
> > > proc_thread. Yea, not great arguments for using pthread, but at least as
> > > strong as arguments to use proc_thread.
> > 
> > what if we went with proc_pthread?  I just googled it, and there is a page
> >  of hits, all related to this very subject.  :-)
> > 
> > Ryan
> > 
> > >
> > > Bill
> > >
> > > - Original Message -
> > > From: "Ryan Bloom" <[EMAIL PROTECTED]>
> > > To: <[EMAIL PROTECTED]>; "Bill Stoddard" <[EMAIL PROTECTED]>
> > > Sent: Wednesday, September 19, 2001 11:59 PM
> > > Subject: Re: [PATCH] Standardize AcceptMutex config
> > >
> > > > On Wednesday 19 September 2001 08:56 pm, Bill Stoddard wrote:
> > > > > > On Wed, Sep 19, 2001 at 07:53:47PM -0700, Ryan Bloom wrote:
> > > > > > > Why is calling it proc_pthread silly?  We are talking about a
> > > > > > > pthread based process lock.  Personally, I think Apache 1.3 should
> > > > > > > be changed, especially since it hasn't been released yet.  My
> > > > > > > concern is that calling it a pthread lock makes it sound like we
> > > > > > > are just locking threads.
> > > > > >
> > > > > > Fine, change one or the other.  Having it inconsistent is *silly*.
> > > > > > Personally, I think Apache 1.3's pthread makes more sense given the
> > > > > > context.  -- justin
> > > > >
> > > > > I agree with Justin.
> > > >
> > > > That's fine, could you please explain why?  I am trying to understand
> > > > this POV.  Why don't you think that calling out the proc part is
> > > > important? I don't mind being wrong, but I do mind not knowing why I am
> > > > wrong.  :-)
> > > >
> > > > Ryan
> > > >
> > > > __
> > > > Ryan Bloom [EMAIL PROTECTED]
> > > > Covalent Technologies [EMAIL PROTECTED]
> > > > --
> > 
> > -- 
> > 
> > __
> > Ryan Bloom [EMAIL PROTECTED]
> > Covalent Technologies [EMAIL PROTECTED]
> > --
> > 



Re: cvs commit: httpd-2.0/server/mpm/worker worker.c

2001-09-20 Thread Aaron Bannert

On Thu, Sep 20, 2001 at 01:04:48AM -0700, Justin Erenkrantz wrote:
> On Thu, Sep 20, 2001 at 01:00:09AM -0700, Greg Stein wrote:
> > >...
> > > Whoever does the software behind apache-mbox (I take it this is 
> > > mod_mbox?) might want to take note that it's spitting out invalid URLs..
> > 
> > The URLs produced by mod_mbox are fine. Aaron must have posted an unescaped
> > version of the URL.
> 
> I have a feeling Aaron manually generated the URL.  -- justin

Huh? No way, that was the one from my Location: bar in Netscape.

-aaron




RE: server reached MaxClients setting

2001-09-20 Thread Paul Hooper

Warning only - your MaxClients directive is set too low.  Increasing it will
stop this message from appearing in your server's error log.

-Original Message-
From: Daniel Abad [mailto:[EMAIL PROTECTED]]
Sent: 20 September 2001 15:33
To: '[EMAIL PROTECTED]'
Subject: server reached MaxClients setting


Is it really a problem??? Or just warning?

[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
consider raising the MaxClients setting


Tks.


Dan







server reached MaxClients setting

2001-09-20 Thread Daniel Abad

Is it really a problem??? Or just warning?

[Thu Sep 20 00:28:53 2001] [error] server reached MaxClients setting,
consider raising the MaxClients setting


Tks.


Dan



Re: pool cleanup (was: Re: New post-log-transaction hook?)

2001-09-20 Thread Ryan Bloom

On Wednesday 19 September 2001 02:21 pm, Greg Stein wrote:
> On Wed, Sep 19, 2001 at 12:16:24PM -0700, Ryan Bloom wrote:
> > On Wednesday 19 September 2001 11:37 am, William A. Rowe, Jr. wrote:
> > > From: "Greg Stein" <[EMAIL PROTECTED]>
> > > Sent: Wednesday, September 19, 2001 1:26 PM
> > > Really?  No.  Cleanups are run as a LIFO stack.  Anything that existed
> > > when something was added to the pool must exist when that something is
> > > removed from the pool.
>
> They are not strictly LIFO. You can remove a cleanup and insert a new one
> at any time. Let's say that the cleanup list looked like:
>
> cleanups: A
>
> and you add a new one to the "front":
>
> cleanups: B A
>
> and now case 1, where A needs to rejigger its cleanup param a bit:
>
> cleanups: A' B
>
> or case 2, where A simply removes its cleanup:
>
> cleanups: B
>
>
> Case 2 actually happens quite often.

This is all true, but it is also orthogonal to this conversation. The question we are
trying to answer here is: can you register a cleanup within a cleanup? If we are in
the middle of running the cleanups, and somebody actually calls cleanup_run
or cleanup_kill from within a cleanup, they are broken and it may not work.
It also doesn't make any sense, because the reason to run a cleanup early is to
perform some action sooner than you would have otherwise, but in this case we are
going to perform that action in a few seconds anyway.

Since the two cases above require a programmer to either remove or run a cleanup,
they don't really make sense in the context of registering a cleanup within a cleanup.
This means that it is safe to register a cleanup within a cleanup, assuming the code
is patched correctly.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] Standardize AcceptMutex config

2001-09-20 Thread Ryan Bloom

On Thursday 20 September 2001 05:26 am, Jim Jagielski wrote:

Okay, with three people against me, I stand corrected.  Please, let's change
the 2.0 version to just pthread.  :-)

Ryan

> Ryan Bloom wrote:
> > Why is calling it proc_pthread silly?  We are talking about a pthread
> > based process lock.  Personally, I think Apache 1.3 should be changed,
> > especially since it hasn't been released yet.  My concern is that calling
> > it a pthread lock makes it sound like we are just locking threads.
>
> I think calling it proc_pthread under 1.3 is pretty silly actually.
> Unless we call all the others proc_flock, etc.. We're talking about
> how we mutex the accept, and we're using pthread locking for that.
> When we say 'sysvsem' we're not locking semaphores :)

-- 

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] Standardize AcceptMutex config

2001-09-20 Thread Bill Stoddard

Ooops! And the list grows with each post we make :-)

proc_pthread, proc_pthread, proc_pthread...

Bill

> On Wednesday 19 September 2001 09:27 pm, Bill Stoddard wrote:
> > proc_thread doesn't tell me anything. If I google for proc_thread, I get no
> > hits. If I google pthread, I at least get hits that I can search through to
> > find anything to do with a 'lock'. pthread is easier to read than
> > proc_thread. Yea, not great arguments for using pthread, but at least as
> > strong as arguments to use proc_thread.
> 
> what if we went with proc_pthread?  I just googled it, and there is a page
>  of hits, all related to this very subject.  :-)
> 
> Ryan
> 
> >
> > Bill
> >
> > - Original Message -
> > From: "Ryan Bloom" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>; "Bill Stoddard" <[EMAIL PROTECTED]>
> > Sent: Wednesday, September 19, 2001 11:59 PM
> > Subject: Re: [PATCH] Standardize AcceptMutex config
> >
> > > On Wednesday 19 September 2001 08:56 pm, Bill Stoddard wrote:
> > > > > On Wed, Sep 19, 2001 at 07:53:47PM -0700, Ryan Bloom wrote:
> > > > > > Why is calling it proc_pthread silly?  We are talking about a
> > > > > > pthread based process lock.  Personally, I think Apache 1.3 should
> > > > > > be changed, especially since it hasn't been released yet.  My
> > > > > > concern is that calling it a pthread lock makes it sound like we
> > > > > > are just locking threads.
> > > > >
> > > > > Fine, change one or the other.  Having it inconsistent is *silly*.
> > > > > Personally, I think Apache 1.3's pthread makes more sense given the
> > > > > context.  -- justin
> > > >
> > > > I agree with Justin.
> > >
> > > That's fine, could you please explain why?  I am trying to understand
> > > this POV.  Why don't you think that calling out the proc part is
> > > important? I don't mind being wrong, but I do mind not knowing why I am
> > > wrong.  :-)
> > >
> > > Ryan
> > >
> > > __
> > > Ryan Bloom [EMAIL PROTECTED]
> > > Covalent Technologies [EMAIL PROTECTED]
> > > --
> 
> -- 
> 
> __
> Ryan Bloom [EMAIL PROTECTED]
> Covalent Technologies [EMAIL PROTECTED]
> --
> 




Re: [PATCH] Standardize AcceptMutex config

2001-09-20 Thread Ryan Bloom

On Wednesday 19 September 2001 09:27 pm, Bill Stoddard wrote:
> proc_thread doesn't tell me anything. If I google for proc_thread, I get no
> hits. If I google pthread, I at least get hits that I can search through to
> find anything to do with a 'lock'. pthread is easier to read than
> proc_thread. Yea, not great arguments for using pthread, but at least as
> strong as arguments to use proc_thread.

what if we went with proc_pthread?  I just googled it, and there is a page
 of hits, all related to this very subject.  :-)

Ryan

>
> Bill
>
> - Original Message -
> From: "Ryan Bloom" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>; "Bill Stoddard" <[EMAIL PROTECTED]>
> Sent: Wednesday, September 19, 2001 11:59 PM
> Subject: Re: [PATCH] Standardize AcceptMutex config
>
> > On Wednesday 19 September 2001 08:56 pm, Bill Stoddard wrote:
> > > > On Wed, Sep 19, 2001 at 07:53:47PM -0700, Ryan Bloom wrote:
> > > > > Why is calling it proc_pthread silly?  We are talking about a
> > > > > pthread based process lock.  Personally, I think Apache 1.3 should
> > > > > be changed, especially since it hasn't been released yet.  My
> > > > > concern is that calling it a pthread lock makes it sound like we
> > > > > are just locking threads.
> > > >
> > > > Fine, change one or the other.  Having it inconsistent is *silly*.
> > > > Personally, I think Apache 1.3's pthread makes more sense given the
> > > > context.  -- justin
> > >
> > > I agree with Justin.
> >
> > That's fine, could you please explain why?  I am trying to understand
> > this POV.  Why don't you think that calling out the proc part is
> > important? I don't mind being wrong, but I do mind not knowing why I am
> > wrong.  :-)
> >
> > Ryan
> >
> > __
> > Ryan Bloom [EMAIL PROTECTED]
> > Covalent Technologies [EMAIL PROTECTED]
> > --

-- 

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Alex Stewart

Rodent of Unusual Size wrote:

> Graham Leggett wrote:
> 
>>But consensus has just been reached that there will be a
>>single rollup release, so out of necessity there will
>>have to be one version per release.
>>
> 
> That is a consensus that was built quite quickly, so it
> is certainly non-binding if new data suggest it is not
> the best alternative.
> 

Just for clarification here, I would like to point out that I'm all in 
favor of the apparent consensus regarding the single rollup release, and 
nothing in my response was intended to change that in any way.

It had appeared that, given the consensus on the "what" (single rollup), 
people had started to move on to the "how" (CVS tagging and rollup 
procedures), and I was responding to some of the details of that topic.

My point was really that _given_ there's going to be a single rollup 
with a particular release number, that rollup release number doesn't 
necessarily have to be tied to the version numbers of the multiple 
components that are in it, and it might be easier if it wasn't.

Sorry if there was any confusion..

-alex




Re: [PATCH] Standardize AcceptMutex config

2001-09-20 Thread Jim Jagielski

Ryan Bloom wrote:
> 
> Why is calling it proc_pthread silly?  We are talking about a pthread based
> process lock.  Personally, I think Apache 1.3 should be changed, especially
> since it hasn't been released yet.  My concern is that calling it a pthread lock
> makes it sound like we are just locking threads.
> 

I think calling it proc_pthread under 1.3 is pretty silly actually.
Unless we call all the others proc_flock, etc.. We're talking about
how we mutex the accept, and we're using pthread locking for that.
When we say 'sysvsem' we're not locking semaphores :)
-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  "A society that will trade a little liberty for a little order
   will lose both and deserve neither"



Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Alex Stewart

Graham Leggett wrote:

> Alex Stewart wrote:
>>There seems to be a big assumption here that "release" is the same as
>>"version", which seems like an unnecessary restriction.
>>
>>Frankly, if these are separate subprojects we're talking about (which it
>>seems pretty clear they're going to be evolving into, if they aren't
>>already), they should have separate, independent versioning.
>>
> 
> But consensus has just been reached that there will be a single rollup
> release, so out of necessity there will have to be one version per
> release.


Why?  It's the necessity for the one-to-one mapping between versions and 
releases that I'm questioning.  I don't see the requirement.  My point 
is that there is (and should be) a difference between _release_ 
numbering and _version_ numbering.  The same version of a module may go 
in multiple releases (if nothing's changed in that particular bit), so 
why change the module's version number just because it's being packaged 
again?  Likewise, why restrict module versioning such that its version 
can't change unless there's another rollup (or worse yet, its version 
can't change unless there's a new httpd released)?


>>Trying to
>>coordinate the version numbers of umpteen different projects just
>>because one of their possible distribution channels distributes them
>>together is silly and a lot of unnecessary extra work.
>>
> 
> We are currently coordinating three different projects (httpd-core, apr,
> apr-util) being released together and things are working fine. I don't
> see how expanding this to 4 or 5 is such a problem?


Well, your previous message demonstrated one reason:  It requires a lot 
more coordination (the "enormous trumpet call") to make sure things are 
consistent at rollup time, and there's no advantage (that I see) gained 
from it.  It also doesn't scale well at all.  (As somebody who's 
designed and administrated a few different large-scale CVS-based 
software release systems, I'm speaking from personal experience on that 
bit.)

In the short term, we may not be scaling to more than 4 or 5 projects, 
but I don't see why we should deliberately limit ourselves to that 
either, particularly since there's the potential for splitting this 
whole thing out into quite a few more groups (or bringing more things 
into the fold) later on if people decide it's worth it.

>>I agree with the global tagging thing, but I don't see why this much
>>effort has to be put into making everything ready concurrently just so
>>it can be rolled together.  Automatic coordination of this sort of thing
>>is part of what CVS (and in particular CVS tags) is supposed to be good for.
>>
> 
> "Making everything ready" just means "make sure it's not currently
> broken". This is exactly how we do things now, I don't think anything
> should change.


Except that you're going to get multiple semi-independent groups working 
on multiple internal timelines and all of a sudden you have to hold off 
the release of module A because module B's got a big problem that'll 
take a few days to fix, then by the time module B is fixed, module C has 
a problem, and when everything finally gets straightened out, something 
you could have gotten out the door in an hour has taken a week and a half.

>>It seems to me that each subproject should attempt to maintain at all
>>times a tag that says "current rollup-candidate", which isn't
>>necessarily the latest-and-greatest, but is the latest version that's
>>stable and without showstoppers.


[Actually, I should have said "it's a _recent_ version that's stable and 
without showstoppers".]

> I suggested this a while back - but after thinking about it some more I
> realised this just means extra work. Instead of tagging it once when the
> trumpet call is released, we must now update the latest-known-working
> tag every time we make a commit - yuck.


Umm, no.  All it means is that each group maintains its own release 
schedule, and updates its "releasable" tag appropriately for their 
schedule.  This doesn't have to be every commit, it could be every day, 
or every week, or whenever somebody feels like it (and it _can_ be that 
flexible, because each group doesn't have to drop everything and 
coordinate with everybody each time somebody wants to update things).

-alex




Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Graham Leggett

Rodent of Unusual Size wrote:

> > But consensus has just been reached that there will be a
> > single rollup release, so out of necessity there will
> > have to be one version per release.
> 
> That is a consensus that was built quite quickly, so it
> is certainly non-binding if new data suggest it is not
> the best alternative.

This is true - but if we can't start agreeing to and sticking with
certain basic decisions about how the release will be issued then the
rollup release is never going to happen. So many times this discussion
has been started, but then it fragments into many little "what if we do
it completely differently" discussions and we're right back to where we
started.

As a result of this, mod_proxy - which was finished and ready for
testing almost six months ago - is still not getting the testing it
needs. At this stage if it's too hard to get the rollup release going
now with v2.0 trying to get out there, then we should just put proxy
back (as agreed before) and try to sort out the rollup for v2.1.

Regards,
Graham
-- 
-
[EMAIL PROTECTED]"There's a moon
over Bourbon Street
tonight..."


Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Rodent of Unusual Size

Graham Leggett wrote:
> 
> But consensus has just been reached that there will be a
> single rollup release, so out of necessity there will
> have to be one version per release.

That is a consensus that was built quite quickly, so it
is certainly non-binding if new data suggest it is not
the best alternative.
-- 
#ken    P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"All right everyone!  Step away from the glowing hamburger!"



apxs

2001-09-20 Thread Shrinivas Samant

Hi,
I am using the Apache 2.0.23 apxs tool to build my module. I have to link
my module against a third-party shared library (libvsapi.so), which I did
using the -L & -l options. The mod_vs.so was built, but the build failed
when I ran make. The make output is attached below.

I think the resulting mod_vs.so file does not have the additional linked
libraries.

I used:

./apxs -i -a -c mod_vs.c mod_vs.h tm_service.c tm_service.h tmvs.h tmvsdef.h
tmvsx.h m_linux.h -L . -l vsapi (also tried -l libvsapi.so)

Any help is appreciated.
-Shrini


make[1]: Entering directory `/usr/local/src/httpd-2_0_23'
/bin/sh /usr/local/src/httpd-2_0_23/srclib/apr/libtool --silent --mode=compile cc -g -O2 -pthread -DLINUX=2 -D_REENTRANT -D_XOPEN_SOURCE=500 -D_BSD_SOURCE -D_SVID_SOURCE -DAP_HAVE_DESIGNATED_INITIALIZER -I. -I/usr/local/src/httpd-2_0_23/os/unix -I/usr/local/src/httpd-2_0_23/server/mpm/prefork -I/usr/local/src/httpd-2_0_23/modules/http -I/usr/local/src/httpd-2_0_23/include -I/usr/local/src/httpd-2_0_23/srclib/apr/include -I/usr/local/src/httpd-2_0_23/srclib/apr-util/include -I/usr/local/src/httpd-2_0_23/modules/dav/main -c modules.c && touch modules.lo
/bin/sh /usr/local/src/httpd-2_0_23/srclib/apr/libtool --silent --mode=link gcc -g -O2 -pthread -DLINUX=2 -D_REENTRANT -D_XOPEN_SOURCE=500 -D_BSD_SOURCE -D_SVID_SOURCE -DAP_HAVE_DESIGNATED_INITIALIZER -I. -I/usr/local/src/httpd-2_0_23/os/unix -I/usr/local/src/httpd-2_0_23/server/mpm/prefork -I/usr/local/src/httpd-2_0_23/modules/http -I/usr/local/src/httpd-2_0_23/include -I/usr/local/src/httpd-2_0_23/srclib/apr/include -I/usr/local/src/httpd-2_0_23/srclib/apr-util/include -I/usr/local/src/httpd-2_0_23/modules/dav/main -export-dynamic -o httpd modules.lo modules/aaa/mod_access.la modules/aaa/mod_auth.la modules/filters/mod_include.la modules/loggers/mod_log_config.la modules/metadata/mod_env.la modules/metadata/mod_setenvif.la modules/http/mod_http.la modules/http/mod_mime.la modules/vs/mod_vs.la modules/generators/mod_status.la modules/generators/mod_autoindex.la modules/generators/mod_asis.la modules/generators/mod_cgi.la modules/mappers/mod_negotiation.la modules/mappers/mod_dir.la modules/mappers/mod_imap.la modules/mappers/mod_actions.la modules/mappers/mod_userdir.la modules/mappers/mod_alias.la modules/mappers/mod_so.la server/mpm/prefork/libprefork.la server/libmain.la os/unix/libos.la /usr/local/src/httpd-2_0_23/srclib/pcre/libpcre.la /usr/local/src/httpd-2_0_23/srclib/apr-util/libaprutil.la /usr/local/src/httpd-2_0_23/srclib/apr/libapr.la /usr/local/src/httpd-2_0_23/srclib/apr/shmem/unix/mm/libmm.la -lnsl -lnsl -lm -lcrypt -lnsl -ldl -L/usr/lib -lexpat
/usr/local/apache2/modules/mod_vs.so: undefined reference to `VSSetLogFlag'
/usr/local/apache2/modules/mod_vs.so: undefined reference to `VSInit'
/usr/local/apache2/modules/mod_vs.so: undefined reference to
`VSSetLogFilePath'
collect2: ld returned 1 exit status
make[1]: *** [httpd] Error 1
make[1]: Leaving directory `/usr/local/src/httpd-2_0_23'
make: *** [all-recursive] Error 1

Shrinivas Samant
Bell Labs Innovations, Lucent Technologies
tel: 732-949-6533
mob: 732-693-7528
fax: 732-949-1922
[EMAIL PROTECTED]






Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Graham Leggett

Alex Stewart wrote:

> There seems to be a big assumption here that "release" is the same as
> "version", which seems like an unnecessary restriction.
> 
> Frankly, if these are separate subprojects we're talking about (which it
> seems pretty clear they're going to be evolving into, if they aren't
> already), they should have separate, independent versioning.

But consensus has just been reached that there will be a single rollup
release, so out of necessity there will have to be one version per
release.

> Trying to
> coordinate the version numbers of umpteen different projects just
> because one of their possible distribution channels distributes them
> together is silly and a lot of unnecessary extra work.

We are currently coordinating three different projects (httpd-core, apr,
apr-util) being released together and things are working fine. I don't
see how expanding this to 4 or 5 is such a problem?

> I agree with the global tagging thing, but I don't see why this much
> effort has to be put into making everything ready concurrently just so
> it can be rolled together.  Automatic coordination of this sort of thing
> is part of what CVS (and in particular CVS tags) is supposed to be good for.

"Making everything ready" just means "make sure it's not currently
broken". This is exactly how we do things now, I don't think anything
should change.

> It seems to me that each subproject should attempt to maintain at all
> times a tag that says "current rollup-candidate", which isn't
> necessarily the latest-and-greatest, but is the latest version that's
> stable and without showstoppers.

I suggested this a while back - but after thinking about it some more I
realised this just means extra work. Instead of tagging it once when the
trumpet call is released, we must now update the latest-known-working
tag every time we make a commit - yuck.

Regards,
Graham
-- 
-
[EMAIL PROTECTED]"There's a moon
over Bourbon Street
tonight..."


Re: cvs commit: httpd-proxy/module-2.0 CHANGES mod_proxy.c mod_proxy.h proxy_http.c

2001-09-20 Thread Graham Leggett

[EMAIL PROTECTED] wrote:

>   Added New Option 'HTTPProxyOverrideReturnedErrors' which lets the server override
>   the error pages returned from the proxied server and replace them with the standard
>   server error handling on the main server.

I don't like the name of the option - it should start with Proxy* in
order to be consistent with the other options.

Something like "ProxyErrorOverride"?

Is there a reason this option doesn't also work with FTP? (Admittedly I
haven't looked at the ftp code for a while, I'm not sure if it would
make sense, but for consistency I think it should work for both).

Regards,
Graham
-- 
-
[EMAIL PROTECTED]"There's a moon
over Bourbon Street
tonight..."


Re: Q1: Rollup Release Format - Score So Far...

2001-09-20 Thread Alex Stewart

Graham Leggett wrote:

> mod_foo wants to make a release, so they release v2.0.45.1 of the rollup
> tree, containing 2.0.45 of core and 2.0.45.1 of mod_foo. But what about
> mod_bar and the other modules? Will their tags need to be bumped up to
> 2.0.45.1 also? I would imagine they would, which is a problem.


There seems to be a big assumption here that "release" is the same as 
"version", which seems like an unnecessary restriction.

Frankly, if these are separate subprojects we're talking about (which it 
seems pretty clear they're going to be evolving into, if they aren't 
already), they should have separate, independent versioning.  Trying to 
coordinate the version numbers of umpteen different projects just 
because one of their possible distribution channels distributes them 
together is silly and a lot of unnecessary extra work.  At the same 
time, saying that we can't have a specific bundle release number because 
all the contents have different versions is equally silly.  The bundle 
release number reflects the number of the bundle, not necessarily the 
version of any of the contents.

Well, ok, it makes sense that the rollup bundle of "httpd and friends" 
should reflect the version number of the httpd core that's in it (that's 
the one version number that most people on the outside would probably 
expect to be consistent).  It also makes sense that there may be 
incremental bundles released between httpd version changes, so a 
sub-release identifier of some sort is needed.  The number on the 
bundle, however, in no way has to have any relationship to any of the 
extra module version numbers:

For example, apache-httpd-complete-2.0.1-12.tar.gz might contain:
   httpd version 2.0.1 (obviously)
   mod_foo version 2.0.1
   mod_bar version 1.7
   mod_baz version 18.7.3

apache-httpd-complete-2.0.1-13.tar.gz could contain exactly the same 
thing, except mod_bar is now at version 1.8, or whatever.

Now, admittedly, you could do the same thing with a date stamp instead 
of a revision number, but for these purposes "12" works just as well as 
"20020423", and is arguably more readable/usable (for one thing, you can 
tell that "13" is the next release after "12", but who knows what 
"20020611" is).  Anyway, the filenames we're looking at using are 
getting long enough already, IMO.


> Ideally the rollup release should commence with an enormous trumpet
> call, followed by the tagging of *all* the modules (including core) with
> the same tag. At this point *all* modules (including core) have to fix
> any showstoppers, and a release follows shortly afterwards to testers.
> If the release works, woohoo - it's a release. If not, oh well, it's an
> alpha.


I agree with the global tagging thing, but I don't see why this much 
effort has to be put into making everything ready concurrently just so 
it can be rolled together.  Automatic coordination of this sort of thing 
is part of what CVS (and in particular CVS tags) is supposed to be good for.

It seems to me that each subproject should attempt to maintain at all 
times a tag that says "current rollup-candidate", which isn't 
necessarily the latest-and-greatest, but is the latest version that's 
stable and without showstoppers.  At any point in time (any day of any 
week) and with no special warning, somebody should ideally be able to 
pull from all the appropriate CVS sources using that tag and get 
something that's appropriate to be made into a rollup tarball.  When a 
subproject has an update worthy of a new rollup, they tag it with that 
tag in their tree, and ask whoever's in charge of rolling releases to do 
another run.  At that point, a general notice might go out just so that 
everyone can do a quick double-check that what's tagged in their 
repositories is the stuff they really want going out, and then it gets 
pulled and rolled.  No big fanfare or mad scrambling needed, though.

Anyway, that's my $.02..

-alex




[Fwd: Re: Is building Apache 1.3.20 with Solaris CC 6.0 or 5.0 possible?]

2001-09-20 Thread Justin Erenkrantz

This is a weird one.  See my reply below for my thoughts.

In short, the answer is no because forte is screaming about the double
declaration of mutex - which seems to be a valid error.  The mutex
in include/multithread.h really needs to be namespace-protected.

Original message here:

http://groups.google.com/groups?hl=en&group=comp.infosystems.www.servers.unix&selm=d1efd44f.0109190840.24ca4739%40posting.google.com

-- justin

--- [EMAIL PROTECTED] wrote:
> From: [EMAIL PROTECTED]
> Date: Thu, 20 Sep 2001 00:52:14 -0700
> Reply-to: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: Is building Apache 1.3.20 with Solaris
> CC 6.0 or 5.0 possible?
> 
> From: [EMAIL PROTECTED] (Justin Erenkrantz)
> Newsgroups: comp.infosystems.www.servers.unix
> Subject: Re: Is building Apache 1.3.20 with Solaris
> CC 6.0 or 5.0 possible?
> References:
> <[EMAIL PROTECTED]>
> NNTP-Posting-Host: 24.13.179.162
> Message-ID:
> <[EMAIL PROTECTED]>
> 
> [EMAIL PROTECTED] (Nick Lindridge) wrote in message
> news:<[EMAIL PROTECTED]>...
> > Hi,
> > 
> > The answer is of of course yes, but has anyone actually built Apache
> > with Forte or CC 5 recently for Solaris 7 or 8?  Trying regular CC,
> > compat 5 and compat 4 all give up for the same reasons, and I wondered
> > if there are any config options that I've missed to get past the
> > obvious problems. An example build gets a little way and then the
> > output below.
> > 
> > Not sure which is going to be the least pain at this point -
> > installing gcc or fixing up the includes.
> 
> Wow.  It's broken.  I'll take a look at it in a few days. 
> In the meantime, I'd suggest gcc.  sunfreeware.com has 
> pre-built binaries you can download.
> 
> Odd that we haven't caught this before...
> 
> /usr/include/sys/mutex.h is getting included which defines a
> structure called mutex.  I wonder why gcc isn't complaining
> about it.  I wonder if it defines _ASM.  For the complete
> path, sys/mutex.h is included from sys/t_lock.h which is 
> included from sys/file.h which is included from ap_config.h.
> The only way to work around this might be to define _ASM
> before ap_config.h includes sys/file.h.  That's a hack
> though.
> 
> Otherwise, it looks like we may need to go on a type-rename 
> hunt in Apache 1.3.  This won't be fixed until 1.3.21 (at the 
> very least).
> 
> I'm going to CC this to [EMAIL PROTECTED]  Feel free to
> keep an eye on the progress there.
> 
> Justin Erenkrantz

- End forwarded message -
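
For what it's worth, a sketch of the _ASM workaround mentioned in the post
above (explicitly called a hack there; that _ASM suppresses the conflicting
struct in sys/mutex.h is the post's guess, not something verified here):

    /* hack: try to keep Solaris' sys/mutex.h (pulled in via ap_config.h ->
     * sys/file.h -> sys/t_lock.h -> sys/mutex.h) from declaring "struct
     * mutex", which collides with the mutex type in include/multithread.h */
    #define _ASM
    #include "ap_config.h"
    #undef _ASM

The real fix, as noted in the post, is a type-rename so that the 1.3 mutex
type is namespace-protected.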




Re: cvs commit: httpd-2.0/server/mpm/worker worker.c

2001-09-20 Thread Justin Erenkrantz

On Thu, Sep 20, 2001 at 01:00:09AM -0700, Greg Stein wrote:
> >...
> > Whoever does the software behind apache-mbox (I take it this is 
> > mod_mbox?) might want to take note that it's spitting out invalid URLs..
> 
> The URLs produced by mod_mbox are fine. Aaron must have posted an unescaped
> version of the URL.

I have a feeling Aaron manually generated the URL.  -- justin




Re: cvs commit: httpd-2.0/server/mpm/worker worker.c

2001-09-20 Thread Greg Stein

On Thu, Sep 20, 2001 at 12:53:39AM -0700, Alex Stewart wrote:
> On a largely unrelated note, but something I found a little ironic given 
> the nature of this list:
> 
> Aaron Bannert wrote:
> 
> > 
>http://www.apachelabs.org/apache-mbox/199902.mbox/<[EMAIL PROTECTED]>
> 
> Please note that the above is not a valid URL.  Specifically, the "<"

Agreed.

>...
> Whoever does the software behind apache-mbox (I take it this is 
> mod_mbox?) might want to take note that it's spitting out invalid URLs..

The URLs produced by mod_mbox are fine. Aaron must have posted an unescaped
version of the URL.

(go to a mod_mbox page and view the source...)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: cvs commit: httpd-2.0/server/mpm/worker worker.c

2001-09-20 Thread Alex Stewart

On a largely unrelated note, but something I found a little ironic given 
the nature of this list:

Aaron Bannert wrote:

> http://www.apachelabs.org/apache-mbox/199902.mbox/<[EMAIL PROTECTED]>


Please note that the above is not a valid URL.  Specifically, the "<" 
and ">" characters are technically not allowed in URLs and must be 
escaped.  (I bring this up partly because in Mozilla, this message came 
up with two links, a http: link to 199902.mbox, and a mailto: link to 
[EMAIL PROTECTED], so I had to do some cutting and 
pasting to actually see the right document)

Whoever does the software behind apache-mbox (I take it this is 
mod_mbox?) might want to take note that it's spitting out invalid URLs..
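
(For reference, the offending characters just need to be percent-encoded --
"<" as %3C and ">" as %3E -- so the link would look like
http://www.apachelabs.org/apache-mbox/199902.mbox/%3C...%3E with the
message-id, elided here as above, between them.)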

-alex