race conditions

2001-11-29 Thread victork

Hi all,


I am running Apache 1.3.20 on Linux (SuSE 6.4), and as I was reading the
documentation on the apache.org web site, I noticed that it mentions that
several race conditions exist.  However, it only says that they relate
to the restart signals (SIGHUP/SIGUSR1) and the die signal (SIGTERM); it
does not specify how and where the race conditions occur.  Looking into
Apache 1.3.20's source code, one place I found that might have a race
condition is in the ap_unblock_alarms function (pasted below), after the
--alarms_blocked statement and before the ++alarms_blocked statement.  The
race condition comes in if exit_after_unblock is true and a SIGALRM
arrives in that interval: this brings us to the timeout function (though
we are supposed to exit).  However, in that case the server would still
exit normally in timeout(int sig), so I don't really see the problem
here.  Another suspicious place is in read_request_line: a request could
be read by the getline function, and before we can disable the SIGUSR1
signal, a graceful restart signal (SIGUSR1) comes in, so we kill the
child process even though we got a request to process.  Anyway, my
question is: did I correctly interpret the race conditions (in
ap_unblock_alarms and read_request_line), and what other race conditions
was the documentation referring to?


API_EXPORT(void) ap_unblock_alarms(void)
{
    --alarms_blocked;
    if (alarms_blocked == 0) {
        if (exit_after_unblock) {
            /* We have a couple race conditions to deal with here, we can't
             * allow a timeout that comes in this small interval to allow
             * the child to jump back to the main loop.  Instead we block
             * alarms again, and then note that exit_after_unblock is
             * being dealt with.  We choose this way to solve this so that
             * the common path through unblock_alarms() is really short.
             */
            ++alarms_blocked;
            exit_after_unblock = 0;
            clean_child_exit(0);
        }
        if (alarm_pending) {
            alarm_pending = 0;
            timeout(0);
        }
    }
}

In addition, I am wondering about the following:

Why doesn't the check on exit_after_unblock appear in lingerout as well
(as it does in the timeout function), since a race condition similar to
the one mentioned above in ap_unblock_alarms exists in lingerout too?
Is it because current_conn is never NULL once alarms are enabled?  If
so, why do we bother to check current_conn in both the timeout and
lingerout functions?  It seems to me that current_conn is always
non-NULL whenever alarms are enabled.

When would you want to use ap_block_alarms?  Is it to make sure that the
code which deals with the memory pools does not get interrupted by any
signals?  I am asking because I am trying to write a similar server
(though of course a lot simpler), and I want to know whether I would
need such blocking.


Thanks in advance,

Victor





Re: CL for Proxy Requests

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 08:01 pm, Eli Marmor wrote:
> Content-Length is not passed through proxy requests, when Apache 2.0 is
> used as the proxy.
>
> Is it a bug?
> Feature?
> Limitation?
>
> Or is it just me?  My configuration?
>
> Many clients depend on this data, for example audio/video players, so
> it is quite bad to lack CL.
>
> Is there any way to tell the API that the filters don't change the
> response size so the original CL can be used?

There is no way to do that, because you will never know whether the filters
changed the data or not.  The reason we don't return a C-L is that we don't
have all of the data, so we can't compute the C-L.  There is a possibility
that we could fix this with a hack: have the C-L filter check whether the
only bucket is a socket bucket or a pipe bucket.  If so, leave the C-L
alone; we can be sure that the data hasn't been changed in that case.  If
the only bucket is any other type, we will automagically compute the C-L.

Ryan


__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



CL for Proxy Requests

2001-11-29 Thread Eli Marmor

Content-Length is not passed through proxy requests, when Apache 2.0 is
used as the proxy.

Is it a bug?
Feature?
Limitation?

Or is it just me?  My configuration?

Many clients depend on this data, for example audio/video players, so
it is quite bad to lack CL.

Is there any way to tell the API that the filters don't change the
response size so the original CL can be used?

-- 
Eli Marmor
[EMAIL PROTECTED]
CTO, Founder
Netmask (El-Mar) Internet Technologies Ltd.
__
Tel.:   +972-9-766-1020  8 Yad-Harutzim St.
Fax.:   +972-9-766-1314  P.O.B. 7004
Mobile: +972-50-23-7338  Kfar-Saba 44641, Israel



Re: Mod_cgi doesn't seem to write stderr to the error_log

2001-11-29 Thread Aaron Bannert

On Thu, Nov 29, 2001 at 05:18:00PM -0800, Ryan Bloom wrote:
> 
> I haven't had time to verify it myself, but I have been told that it is happening.
> Mod_cgi is not actually writing error messages from the script to the error_log.
> I would consider this a major showstopper!

It's working fine for me with both mod_cgi and mod_cgid. The only
difference is that mod_cgid isn't prefixing any log metadata:

prefork+mod_cgi:
  [Thu Nov 29 18:00:41 2001] [error] [client 10.250.1.5] foo

worker+mod_cgid:
  foo

Same thing with and without suexec enabled.

-aaron




Re: Mod_cgi doesn't seem to write stderr to the error_log

2001-11-29 Thread William A. Rowe, Jr.

From: "Ryan Bloom" <[EMAIL PROTECTED]>
Sent: Thursday, November 29, 2001 7:18 PM


> I haven't had time to verify it myself, but I have been told that it is happening.
> Mod_cgi is not actually writing error messages from the script to the error_log.
> I would consider this a major showstopper!

Definitely proven to myself that it's _working_ on HEAD.

Perhaps in conjunction with suexec, specifically?  Or something else I can't
verify, like mod_cgid.

Bill




Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ian Holsman

Brian Pane wrote:

>  From a performance perspective, the two limitations that I see in
> the current worker implementation are:
>  * We're basically guaranteed to have to do an extra context switch on
>each connection, in order to pass the connection from the listener
>thread to a worker thread.
>  * The passing of a pool from listener to worker makes it very tricky
>to optimize away all the mutexes within the pools.


Brian, this sounds similar to the 'Leader/Follower' pattern in the ACE framework
(http://jerry.cs.uiuc.edu/~plop/plop2k/proceedings/ORyan/ORyan.pdf)




> 
> So...please forgive me if this has already been considered and dismissed
> a long time ago, but...why can't the listener and worker be the same 
> thread?
> 
> I'm thinking of a design like this:
> 
>  * There's no dedicated listener thread.
> 
>  * Each worker thread does this:
>  while (!time to shut down) {
>wait for another worker to wake me up;
>if (in shutdown) {
>  exit this thread;
>}
>accept on listen mutex or pipe of death;
>if (pipe of death triggered) {
>  set "in shutdown" flag;
>  wake up all the other workers;
>  exit this thread;
>}
>else {
>  pick another worker and wake it up;
>  handle the connection that I just accepted;
>}
>  }
> 
> --Brian
> 






Mod_cgi doesn't seem to write stderr to the error_log

2001-11-29 Thread Ryan Bloom


I haven't had time to verify it myself, but I have been told that it is happening.
Mod_cgi is not actually writing error messages from the script to the error_log.
I would consider this a major showstopper!

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



mod_rewrite and location directives.

2001-11-29 Thread Ian Holsman

hi.

I was just wondering if anyone knows why mod_rewrite won't allow a
subrequest to work on a per-directory rewrite rule.

It looks like the code has been in there forever...
Here's the code fragment I'm talking about.


static int hook_fixup(request_rec *r)
{
    rewrite_perdir_conf *dconf;
    char *cp;
    char *cp2;
    const char *ccp;
    char *prefix;
    int l;
    int rulestatus;
    int n;
    char *ofilename;

    dconf = (rewrite_perdir_conf *)ap_get_module_config(r->per_dir_config,
                                                        &rewrite_module);

    /* if there is no per-dir config we return immediately */
    if (dconf == NULL) {
        return DECLINED;
    }

    /* we shouldn't do anything in subrequests */
    if (r->main != NULL) {
        return DECLINED;
    }





Re: Request for Patch to 1.3.x

2001-11-29 Thread Bill Stoddard

It's kinda crufty, but so are a lot of other things in 1.3.  It is a small
patch, which is goodness, and I appreciate what it is used for.

If it is useful enough for you to still be interested in it after a month,
I'll add my +1 to Greg's :-)

+1

Bill

- Original Message -
From: "Kevin Mallory" <[EMAIL PROTECTED]>
To: "'Greg Stein'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, November 29, 2001 4:35 PM
Subject: RE: Request for Patch to 1.3.x


> Does anyone have any objections to adding this capability??
>
> -Original Message-
> From: Greg Stein [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, October 28, 2001 10:28 PM
> To: [EMAIL PROTECTED]
> Cc: Kevin Mallory
> Subject: Re: Request for Patch to 1.3.x
>
>
> On Wed, Oct 03, 2001 at 11:19:34AM -0700, Kevin Mallory wrote:
> >...
>
> [ patch allows custom caching mechanisms ]
>
> >...
> > The patch simply adds a new callback (the 'filter callback') into the
>
> >handling in buff.c's routine writev_it_all() and buff_write().
> >
> > When not registered, there is no performance impact to users. This
> > filter callback makes it possible for SpiderCache to correctly
> > intercept the request as it is being processed, thus allowing our
> > product to perform dynamic page caching.
>
> +1 on including this patch.
>
> I see no bad effects, and it has definite utility.
>
> Cheers,
> -g
>
>
> >...
> > *** orig_buff.c Tue Aug 21 17:45:34 2001
> > --- spidercache_buff.c Tue Aug 21 17:45:35 2001
> > ***************
> > *** 356,361 ****
> > --- 356,365 ----
> >   {
> >   int rv;
> >
> > + if (fb->filter_callback != NULL) {
> > + fb->filter_callback(fb, buf, nbyte);
> > + }
> > +
> >   #if defined(WIN32) || defined(NETWARE)
> >   if (fb->flags & B_SOCKET) {
> >   rv = sendwithtimeout(fb->fd, buf, nbyte, 0);
> > ***************
> > *** 438,443 ****
> > --- 442,450 ----
> >  (size_t) SF_UNBOUND, 1, SF_WRITE);
> >   #endif
> >
> > + fb->callback_data = NULL;
> > + fb->filter_callback = NULL;
> > +
> >   return fb;
> >   }
> >
> > ***************
> > *** 1077,1082 ****
> > --- 1084,1095 ----
> >   static int writev_it_all(BUFF *fb, struct iovec *vec, int nvec)
> >   {
> >   int i, rv;
> > +
> > + if (fb->filter_callback != NULL) {
> > + for (i = 0; i < nvec; i++) {
> > + fb->filter_callback(fb, vec[i].iov_base,
> vec[i].iov_len);
> > + }
> > + }
> >
> >   /* while it's nice an easy to build the vector and crud, it's
> painful
> >* to deal with a partial writev()
>
>
> > *** orig_buff.h Tue Aug 21 17:45:34 2001
> > --- spidercache_buff.h Tue Aug 21 17:45:35 2001
> > ***************
> > *** 129,134 ****
> > --- 129,138 ----
> >   Sfio_t *sf_in;
> >   Sfio_t *sf_out;
> >   #endif
> > +
> > + void *callback_data;
> > + void (*filter_callback)(BUFF *, const void *, int );
> > +
> >   };
> >
> >   #ifdef B_SFIO
>
>
> --
> Greg Stein, http://www.lyra.org/
>




RE: Request for Patch to 1.3.x

2001-11-29 Thread Kevin Mallory

Does anyone have any objections to adding this capability??

-Original Message-
From: Greg Stein [mailto:[EMAIL PROTECTED]] 
Sent: Sunday, October 28, 2001 10:28 PM
To: [EMAIL PROTECTED]
Cc: Kevin Mallory
Subject: Re: Request for Patch to 1.3.x


On Wed, Oct 03, 2001 at 11:19:34AM -0700, Kevin Mallory wrote:
>...

[ patch allows custom caching mechanisms ]

>...
> The patch simply adds a new callback (the 'filter callback') into the

>handling in buff.c's routine writev_it_all() and buff_write().
>  
> When not registered, there is no performance impact to users. This 
> filter callback makes it possible for SpiderCache to correctly 
> intercept the request as it is being processed, thus allowing our 
> product to perform dynamic page caching.

+1 on including this patch.

I see no bad effects, and it has definite utility.

Cheers,
-g


>...
> *** orig_buff.c   Tue Aug 21 17:45:34 2001
> --- spidercache_buff.cTue Aug 21 17:45:35 2001
> ***************
> *** 356,361 ****
> --- 356,365 ----
>   {
>   int rv;
>   
> + if (fb->filter_callback != NULL) {
> + fb->filter_callback(fb, buf, nbyte);
> + }
> +
>   #if defined(WIN32) || defined(NETWARE)
>   if (fb->flags & B_SOCKET) {
>   rv = sendwithtimeout(fb->fd, buf, nbyte, 0);
> ***************
> *** 438,443 ****
> --- 442,450 ----
>  (size_t) SF_UNBOUND, 1, SF_WRITE);
>   #endif
>   
> + fb->callback_data = NULL;
> + fb->filter_callback = NULL;
> + 
>   return fb;
>   }
>   
> ***************
> *** 1077,1082 ****
> --- 1084,1095 ----
>   static int writev_it_all(BUFF *fb, struct iovec *vec, int nvec)
>   {
>   int i, rv;
> + 
> + if (fb->filter_callback != NULL) {
> + for (i = 0; i < nvec; i++) {
> + fb->filter_callback(fb, vec[i].iov_base,
vec[i].iov_len);
> + }
> + }
>   
>   /* while it's nice an easy to build the vector and crud, it's
painful
>* to deal with a partial writev()


> *** orig_buff.h   Tue Aug 21 17:45:34 2001
> --- spidercache_buff.hTue Aug 21 17:45:35 2001
> ***************
> *** 129,134 ****
> --- 129,138 ----
>   Sfio_t *sf_in;
>   Sfio_t *sf_out;
>   #endif
> + 
> + void *callback_data;
> + void (*filter_callback)(BUFF *, const void *, int );
> + 
>   };
>   
>   #ifdef B_SFIO


-- 
Greg Stein, http://www.lyra.org/




Re: support for multiple tcp/udp ports

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 12:12 pm, Michal Szymaniak wrote:
> Hello again.
>
> > It is now possible to write a module that will make Apache listen on
> > UDP ports.  However, as somebody who has done this in the past,
> > it's not a good idea.  You lose too much data on every request.
>
> Could you explain what you mean by saying 'you lose data'? Is it losing
> data because of lack of reliability in udp, or missing datagrams that
> arrive to your udp socket and are subsequently overwritten by next ones
> before you manage to service them?

I am assuming it was because of the reliability of udp, but we ran out of time
on the project before we figured out exactly what was happening, and I
never got back to it.

> Anyway, I have tried to modify the echo module to manage additional
> sockets: I added a post_config hook that created new sockets together
> with associated apr_listen_rec structures and then simply inserted them
> into the 'ap_listeners' list. As long as the sockets were tcp-oriented,
> everything was just fine. However, after switching from SOCK_STREAM to
> SOCK_DGRAM, apache exited with a critical error, leaving (in 'error_log')
> a few lines about 'invalid operations on non-tcp socket'.

You are adding the sockets too early.  There are two ways to handle this.

1)  Use the pre_mpm hook instead of post_config.
2)  We need a new hook

If you look at the worker MPM, you will see that it actually adds a pipe
to the listen_rec list, but it doesn't use a hook to do it.  Can you modify 
your code to use the pre_mpm hook, and let me know if that works?  Even
if it does, we may need a new hook, because the pre_mpm hook doesn't
get called for graceful restarts.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: support for multiple tcp/udp ports

2001-11-29 Thread Michal Szymaniak



Hello again.

> It is now possible to write a module that will make Apache listen on
> UDP ports.  However, as somebody who has done this in the past,
> it's not a good idea.  You lose too much data on every request.

Could you explain what you mean by saying 'you lose data'? Is it losing
data because of lack of reliability in udp, or missing datagrams that
arrive to your udp socket and are subsequently overwritten by next ones
before you manage to service them?

Anyway, I have tried to modify the echo module to manage additional
sockets: I added a post_config hook that created new sockets together
with associated apr_listen_rec structures and then simply inserted them
into the 'ap_listeners' list. As long as the sockets were tcp-oriented,
everything was just fine. However, after switching from SOCK_STREAM to
SOCK_DGRAM, apache exited with a critical error, leaving (in 'error_log')
a few lines about 'invalid operations on non-tcp socket'.

I looked into the Apache source, and it is clearly visible that all(?)
the routines assume the sockets from the 'ap_listeners' list to be TCP
ones.  So how can I handle incoming UDP datagrams _without_ modifying
the Apache source?  BTW, the aim is not to run HTTP over UDP, but simply
to support a certain stupid protocol that uses UDP - and it will be good
enough if my module reacts to every incoming UDP datagram by sending an
appropriate response in another UDP datagram.

If it makes any difference - I play with apache 2.0.28.

Kind regards,
--
+  -- -- -  - -   
: Michal Szymaniak | mailto:[EMAIL PROTECTED]
.




Re: ISAPICacheFile - Crashes apache Please help

2001-11-29 Thread Ian Holsman

Ganesh Tirtur wrote:

> Hi,
> 
> I have written an ISAPI extension (which is basically a DLL) which does some
> db query stuff. I want this extension to be loaded in memory as long as
> Apache is running. In the new Apache release (2.0) this particular feature
> has been implemented, via the ISAPICacheFile directive. When I make an
> entry in the httpd.conf file and start Apache, it crashes. I read
> somewhere that Apache tries to read the httpd.conf file twice, so
> eventually it tries to load the DLL twice, and that is where it fails. So,
> I am wondering whether I am making some stupid mistake or it's a known bug
> which is yet to be fixed.
> 
> I greatly appreciate any input/solutions for this problem.
> 
> 
> Thanks in advance,
> Greg
> 
> 

Nup.
Apache re-reads the config twice.
It's just the way it is for the moment.
Check out how some of the other modules handle it
(look at the post_config hook).





ISAPICacheFile - Crashes apache Please help

2001-11-29 Thread Ganesh Tirtur

Hi,

I have written an ISAPI extension (which is basically a DLL) which does some
db query stuff. I want this extension to be loaded in memory as long as
Apache is running. In the new Apache release (2.0) this particular feature
has been implemented, via the ISAPICacheFile directive. When I make an
entry in the httpd.conf file and start Apache, it crashes. I read somewhere
that Apache tries to read the httpd.conf file twice, so eventually it tries
to load the DLL twice, and that is where it fails. So, I am wondering
whether I am making some stupid mistake or it's a known bug which is yet to
be fixed.

I greatly appreciate any input/solutions for this problem.


Thanks in advance,
Greg





Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread TOKILEY


In a message dated 11/29/2001 10:23:47 AM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> > Does the output of mod_deflate have a GZIP and/or ZLIB header on it, or 
not?
> 
>  > Even those 2 headers are NOT the same but that's yet another story.
>  
>  Correct.  deflate is the algorithm. deflate + gzip header is gzip.
>  The module should be capable of producing both, if not now then eventually.
>  
>  Roy

Then this is what I am saying...

Naming the module after the algorithm (which isn't even really the
actual compression algorithm, since it's all just Lempel-Ziv 77, prior
to the IBM/Sperry-Rand patent, with some Huffman algorithms laid in)
would not have been my first choice for a name.

If the 'stated goal' of the module is ( as you seem to have
just said ) to be able to support just about anything that
can appear in an HTTP 'Accept-Encoding:' header then
I would have chosen something more to the point like
mod_encoding ( So it can do both Content-Encoding and
Transfer-Encoding ).

Hey... max nix to me.
I don't care if it's called 
mod_lz77_plus_huffman 
The name doesn't really matter.

I'm simply pleased as punch that SOMETHING is
finally committed to the Apache CVS tree which
has the potential to 'finally' allow Apache to do parts
of the HTTP spec that are over 5 years old now.

It's a great step forward.  It will 'evolve', and the 'extra' things
it needs, like cross-header item inclusion/exclusion, will be
dictated by reality and not by any one individual; but nothing could
happen until there was at least something there to 'patch'
which satisfied all the myriad Apache submission criteria.

Well done. It feels like progress.

Yours...
Kevin Kiley



Does Apache require child processes to die on a SIGHUP?

2001-11-29 Thread Eli Sand

I've got a piped logfile program I wrote to handle my logfiles, and
someone using it on Solaris said that when they try to restart Apache,
it hangs waiting for the piped program to terminate.  Last time I
checked, Apache sends a SIGHUP and then a SIGTERM to all child
processes.  The program calls exit() on SIGTERM, and on Linux it seems
to exit and let Apache restart properly; however, on Solaris it seems
that it doesn't die properly and you have to send a SIGKILL to get it to
die and let Apache restart.

We were able to fix this problem by having the program call exit() when
it receives a SIGHUP too, but is this what Apache is expecting?  Should
it die on a SIGHUP?  If not, any idea why the child process would be
hanging on a Solaris system?

Eli.




Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 11:12 am, Greg Ames wrote:
> Greg Ames wrote:
> > Non-graceful restarts in threaded had the same problem worker has today:
> > no way to blow away threads which are serving long-running requests.
>
> Actually, an Apache'r who wishes to remain anonymous had a novel idea
> for dealing with this:  close the long running worker's network socket
> from a different thread.  That ought to get its attention fairly
> quickly.

It's a graceful restart.  We don't stop connections on threads during a graceful
restart, no matter how long they have been running.  The only reason to stop the
connection is because a timeout pops.  If the timeout doesn't pop, then we
are successfully sending information.

The worker MPM handles this by starting a new child process, which
shares the same slot in the scoreboard.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Greg Ames

Greg Ames wrote:

> Non-graceful restarts in threaded had the same problem worker has today:
> no way to blow away threads which are serving long-running requests.

Actually, an Apache'r who wishes to remain anonymous had a novel idea
for dealing with this:  close the long running worker's network socket
from a different thread.  That ought to get its attention fairly
quickly.

Greg



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Greg Ames

Brian Pane wrote:

> Weren't the thread management problems with the threaded MPM
> related specifically to shutdown?  If it's just shutdown that's
> a problem, it may be possible to solve it.

The basic problem was the global accept mutex.  Once it was time for a
process to die, we didn't have a reliable way to wake up all the threads
in that process who were waiting on the mutex.  We would see this with
graceful restarts and after ap_process_idle_server_maintenance decided
it had too many workers.  If you looked at mod_status, you would see
processes with a few idle threads and often a "G" hanging around
indefinitely.  But other than that, graceful restarts worked fine.

This problem didn't exist when we used separate intra-process and
cross-process accept mutexes, because once it's time to die, nobody
acquires the intra-process mutex.  But that requires an extra syscall on
every request.

With your design, we shouldn't have the problem where some threads in a
process are blocked in mutex land unaware that it's time to die, because
only one thread at a time is an accept thread.

Non-graceful restarts in threaded had the same problem worker has today:
no way to blow away threads which are serving long-running requests.

Greg



Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread Roy T. Fielding

> >  And the name change from mod_gz to mod_deflate was suggested
> >  by Roy, whom I think knows HTTP better than anyone else here..
> 
> Knowing HTTP is one thing... knowing compression formats is another.

Heh, that's amusing.

> Does the output of mod_deflate have a GZIP and/or ZLIB header on it, or not?
> Even those 2 headers are NOT the same but that's yet another story.

Correct.  deflate is the algorithm. deflate + gzip header is gzip.
The module should be capable of producing both, if not now then eventually.

Roy




Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 10:05 am, Greg Ames wrote:
> Ryan Bloom wrote:
> > The model that Brian posted is exactly what we used to do with threaded,
> > if you had multiple ports.
>
> No, you're missing a key difference.  There's no intra-process mutex in
> Brian's MPM.
>
> One thread at a time is chosen to be the accept thread without using a
> mutex.  Once it picks off a new connection/pod byte, it chooses the next
> accept thread and wakes it up before getting bogged down in serving the
> request.

So we move from a mutex to a condition variable.  Okay, yeah, I did miss
that difference.  How big is the performance win from that, though?
Remember that condition variables still require a mutex for some operations.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] apache-1.3/src/os/netware/os.c

2001-11-29 Thread Pavel Novy

Brad, I've tested my patch on a NW5.1 SP3+ box with the latest (beta)
patches installed (011022n5 => wsock4f, ...); I am not running other
versions/configurations of NetWare such as NW6, so I can't test it here.
I'm not absolutely sure that WSAIoctl(..., SO_SSL_GET_FLAGS, ...) is 100%
okay.  I have no documentation, but I am surprised that zero is returned
in lpcbBytesReturned (a NULL pointer is passed in my patch)...  Anyway,
the value returned by this call in lpvOutBuffer seems to be correct (at
least on my server).

In addition to this fix, I am pretty sure that we need a more complex
ap_is_default_port()/ap_default_port() macro/function.  I think that
redirecting from "https://my_site/location" to
"https://my_site:443/location/" is inaccurate (Apache on Linux with
mod_ssl doesn't work this way).  Port 443 should be considered the
default for the https scheme.

I am also experiencing another issue - if accessing the server from our
local domain and omitting the domain name in the URL:

"http(s)://my_server_without_a_domain_name/location" ->
"http(s)://my_server_domain_name/location/"

I can't understand why the "Location" returned in the redirection response
(code 301) is not constructed from the incoming URI (with a trailing slash
added), but from the (virtual) server's configuration parameters.  This is
not NetWare-specific behaviour, and mod_dir (and so on) is responsible for
it.  However, I'm not too familiar with the W3C specification...

Regards,
Pavel

Brad Nicholes wrote:

 > Pavel,
 > Your patch looks good.  It looks like a much cleaner solution.
 > What version of Winsock have you tested this patch with?  Did you try it
 > on NW6?  As soon as I get some time to implement it and test it myself,
 > I will get it checked in.
 >
 > thanks,
 > Brad
 >
 >
 >
 >>> [EMAIL PROTECTED] Wednesday, November 28, 2001 7:50:57 AM >>>
 > Hi,
 > attached patch to fix invalid redirections from
 > "https://some_site/some_location"; to
 > "http://some_site/some_location/";.
 >
 > Pavel
 >
 >







Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Aaron Bannert

On Thu, Nov 29, 2001 at 09:59:10AM -0800, Ryan Bloom wrote:
> On Thursday 29 November 2001 09:45 am, Brian Pane wrote:
> > Right--the fact that the transaction pools are children of a pool
> > owned by the listener thread means that we have to do locking when
> > we destroy a transaction pool (to avoid race conditions when unregistering
> > it from its parent).
> 
> I'm not sure this will fix the problem, but can't we fix this with one more level
> of indirection?
> 
> Basically, the listener thread would have a pool of pools, one per thread.
> Instead of creating the transaction pool off of the main listener pool, it
> creates the pool of pools off the listener pool, and the transaction pool
> off the next pool in the pool of pools.
> 
>                  listener
>                 /    |    \
>         sub-pool1 sub-pool2 sub-pool3
>             |         |         |
>        trans-pool trans-pool trans-pool
> 
> The sub-pools are long-lived, as long as the listener pool. By moving the
> trans-pool under the sub-pool, and only allowing one trans-pool off
> each sub-pool at a time, we remove the race condition, and can remove
> the lock.

That also opens up the possibility of pool reuse, as well as better
cache-hit rates.  We implement the "pool of pools" as a simple
mutex-protected stack with a set number of pools equal to the number of
threads (or maybe one extra).  Then we make sure that the worker clears
and returns the pool before reentering the worker queue.  We don't get
the benefit of reusing the most recent thread, which would give us cache
hits on that thread's stack segment, but we do get hits on the pools,
and we even reduce overall memory consumption, since the pools that grow
to meet the demand are reused (and we don't force all pools to grow to
meet the peak demand).  Interesting.

-aaron



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Greg Ames

Ryan Bloom wrote:

> The model that Brian posted is exactly what we used to do with threaded,
> if you had multiple ports.  

No, you're missing a key difference.  There's no intra-process mutex in
Brian's MPM.  

One thread at a time is chosen to be the accept thread without using a
mutex.  Once it picks off a new connection/pod byte, it chooses the next
accept thread and wakes it up before getting bogged down in serving the
request.

Greg



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 09:45 am, Brian Pane wrote:
> Aaron Bannert wrote:
> >On Thu, Nov 29, 2001 at 09:20:48AM -0800, Brian Pane wrote:
> >>From a performance perspective, the two limitations that I see in
> >>the current worker implementation are:
> >> * We're basically guaranteed to have to do an extra context switch on
> >>   each connection, in order to pass the connection from the listener
> >>   thread to a worker thread.
> >> * The passing of a pool from listener to worker makes it very tricky
> >>   to optimize away all the mutexes within the pools.
> >
> >IIRC, the problem isn't so much the fact that pools may be passed around,
> >since in that respect they are already threadsafe without mutexes
> >(at least in the current CVS and in the recent "time-space tradeoff"
> >patch. I believe the actual problem as you have described it to me is
> >how destroying a pool requires that the parent be locked. Perhaps you
> >can better characterize the problem.
>
> Right--the fact that the transaction pools are children of a pool
> owned by the listener thread means that we have to do locking when
> we destroy a transaction pool (to avoid race conditions when unregistering
> it from its parent).

I'm not sure this will fix the problem, but can't we fix this with one more level
of indirection?

Basically, the listener thread would have a pool of pools, one per thread.
Instead of creating the transaction pool off of the main listener pool, it
creates the pool of pools off the listener pool, and the transaction pool
off the next pool in the pool of pools.

                 listener
                /    |    \
        sub-pool1 sub-pool2 sub-pool3
            |         |         |
       trans-pool trans-pool trans-pool

The sub-pools are long-lived, as long as the listener pool. By moving the
trans-pool under the sub-pool, and only allowing one trans-pool off
each sub-pool at a time, we remove the race condition, and can remove
the lock.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 09:41 am, Brian Pane wrote:
> Ryan Bloom wrote:
> >On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> >> From a performance perspective, the two limitations that I see in
> >>the current worker implementation are:
> >>  * We're basically guaranteed to have to do an extra context switch on
> >>each connection, in order to pass the connection from the listener
> >>thread to a worker thread.
> >>  * The passing of a pool from listener to worker makes it very tricky
> >>to optimize away all the mutexes within the pools.
> >>
> >>So...please forgive me if this has already been considered and dismissed
> >>a long time ago, but...why can't the listener and worker be the same
> >>thread?
> >
> >That's where we were before worker, with the threaded MPM.  There are
> >thread management issues with that model, and it doesn't scale as well.
>
> Weren't the thread management problems with the threaded MPM
> related specifically to shutdown?  If it's just shutdown that's
> a problem, it may be possible to solve it.

The problem is that without a master thread to manage the other threads,
things start to fall apart. Shutdown and restart both didn't really work well.

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 09:48 am, Aaron Bannert wrote:
> On Thu, Nov 29, 2001 at 09:31:01AM -0800, Ryan Bloom wrote:
> > On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> > > So...please forgive me if this has already been considered and
> > > dismissed a long time ago, but...why can't the listener and worker be
> > > the same thread?
> >
> > That's where we were before worker, with the threaded MPM.  There are
> > thread management issues with that model, and it doesn't scale as well.
>
> Not exactly, in Brian's model we still have the benefit of only having
> one thread per process in the accept loop at one time, which means
> significantly reduced overhead from lock contention (remember my posts
> a few months back about how terrible fcntl() gets when there are even
> more than just a few threads/processes contending for the lock?).
>
> Thread management (at shutdown) has always been a problem in our threaded
> MPMs. I'm still not completely comfortable with the current state of
> worker, but that has more to do with signals than threads.

The model that Brian posted is exactly what we used to do with threaded,
if you had multiple ports.  In fact, it was the very first implementation of
threaded, where we always did multiple locks.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Aaron Bannert

On Thu, Nov 29, 2001 at 09:31:01AM -0800, Ryan Bloom wrote:
> On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
> > So...please forgive me if this has already been considered and dismissed
> > a long time ago, but...why can't the listener and worker be the same
> > thread?
> 
> That's where we were before worker, with the threaded MPM.  There are
> thread management issues with that model, and it doesn't scale as well.

Not exactly, in Brian's model we still have the benefit of only having
one thread per process in the accept loop at one time, which means
significantly reduced overhead from lock contention (remember my posts
a few months back about how terrible fcntl() gets when there are even
more than just a few threads/processes contending for the lock?).

Thread management (at shutdown) has always been a problem in our threaded
MPMs. I'm still not completely comfortable with the current state of
worker, but that has more to do with signals than threads.

-aaron



Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Brian Pane

Aaron Bannert wrote:

>On Thu, Nov 29, 2001 at 09:20:48AM -0800, Brian Pane wrote:
>
>>From a performance perspective, the two limitations that I see in
>>the current worker implementation are:
>> * We're basically guaranteed to have to do an extra context switch on
>>   each connection, in order to pass the connection from the listener
>>   thread to a worker thread.
>> * The passing of a pool from listener to worker makes it very tricky
>>   to optimize away all the mutexes within the pools.
>>
>
>IIRC, the problem isn't so much the fact that pools may be passed around,
>since in that respect they are already threadsafe without mutexes
>(at least in the current CVS and in the recent "time-space tradeoff"
>patch). I believe the actual problem as you have described it to me is
>how destroying a pool requires that the parent be locked. Perhaps you
>can better characterize the problem.
>

Right--the fact that the transaction pools are children of a pool
owned by the listener thread means that we have to do locking when
we destroy a transaction pool (to avoid race conditions when unregistering
it from its parent).

--Brian






Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Cliff Woolley

On Thu, 29 Nov 2001, Brian Pane wrote:

> Weren't the thread management problems with the threaded MPM
> related specifically to shutdown?  If it's just shutdown that's
> a problem, it may be possible to solve it.

Graceful restart was the big problem.

--Cliff

--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA





Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Brian Pane

Ryan Bloom wrote:

>On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
>
>> From a performance perspective, the two limitations that I see in
>>the current worker implementation are:
>>  * We're basically guaranteed to have to do an extra context switch on
>>each connection, in order to pass the connection from the listener
>>thread to a worker thread.
>>  * The passing of a pool from listener to worker makes it very tricky
>>to optimize away all the mutexes within the pools.
>>
>>So...please forgive me if this has already been considered and dismissed
>>a long time ago, but...why can't the listener and worker be the same
>>thread?
>>
>
>That's where we were before worker, with the threaded MPM.  There are
>thread management issues with that model, and it doesn't scale as well.
>

Weren't the thread management problems with the threaded MPM
related specifically to shutdown?  If it's just shutdown that's
a problem, it may be possible to solve it.

--Brian






Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Aaron Bannert

On Thu, Nov 29, 2001 at 09:20:48AM -0800, Brian Pane wrote:
> From a performance perspective, the two limitations that I see in
> the current worker implementation are:
>  * We're basically guaranteed to have to do an extra context switch on
>each connection, in order to pass the connection from the listener
>thread to a worker thread.
>  * The passing of a pool from listener to worker makes it very tricky
>to optimize away all the mutexes within the pools.

IIRC, the problem isn't so much the fact that pools may be passed around,
since in that respect they are already threadsafe without mutexes
(at least in the current CVS and in the recent "time-space tradeoff"
patch). I believe the actual problem as you have described it to me is
how destroying a pool requires that the parent be locked. Perhaps you
can better characterize the problem.

> So...please forgive me if this has already been considered and dismissed
> a long time ago, but...why can't the listener and worker be the same thread?
> 
> I'm thinking of a design like this:
> 
>  * There's no dedicated listener thread.
> 
>  * Each worker thread does this:
>  while (!time to shut down) {
>wait for another worker to wake me up;
>if (in shutdown) {
>  exit this thread;
>}
>accept on listen mutex or pipe of death;
>if (pipe of death triggered) {
>  set "in shutdown" flag;
>  wake up all the other workers;
>  exit this thread;
>}
>else {
>  pick another worker and wake it up;
>  handle the connection that I just accepted;
>}
>  }

Not a bad idea. Kind of a hybrid between threaded and worker. One
thing I'd be concerned about is the reliability of condition variable
signal delivery. Can we guarantee that one generated signal is caught
by one-and-only-one worker? What happens if none of the workers are able
to catch the signal, is the "token" dropped? The queue might have to
maintain that state if it becomes empty, but I think it's doable.

[/tangent]
I do agree that the next step in optimizing any threaded MPM is to
reduce the number of necessary context switches. As for a Solaris-
specific implementation, one thing that I've been meaning to implement
for a long time is an MPM based on doors. Doors would allow us to have a
listener process that delegates out connections to child processes that
have managed pools of threads. There are quite a few advantages to this
model, the first being that door calls can transfer kthread assignment
in mid-quantum _across processes_. We can also manage a very high level
of performance feedback from each of the child processes through door
calls (i.e. calls to report on various resource metrics) so we can better
distribute the load. The unfortunate thing about this model would be that
it is Solaris specific, but having a killer-app for doors might encourage
other kernel developers to implement it (there are some patches around
for doors on linux, for example). Thoughts? I would target this more
long-term, like httpd-2.1 perhaps?

-aaron



Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread TOKILEY


In a message dated 11/29/2001 3:23:27 AM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> > As described by Ken?  Once again, what would he have to do
>  > with that?
>  
>  I just happen to be the chap with the cron job that sends
>  the current STATUS file every Wednesday.  I don't maintain it;
>  that's a shared responsibility (and one of its deficiencies,
>  IMHO, but I've ranted about that before :-).

Then even more apologies are due and are hereby given.

I thought that the 'final editing' of the auto-broadcast
was someone's ongoing task ( yours ).

Then I have no idea who added...

+1 Cliff ( there's now another candidate to be evaluated )

...or why that comment was so ambiguous.

I really did 'miss' a message somewhere and I thought some
guy named Cliff ( Wooley? Doesn't say ) submitted something
for consideration as well and mod_gzip was already 'off' the table.

This impression was then substantiated when a few people from 
the mod_gzip forum who were curious about the status of 
the 'mod_gzip 2.0 submission' queried this forum and got 
absolutely no reply from anyone.

Doesn't matter now...

But is it too much for future reference to ask STATUS file
comments to be a little more explicit? It would help others track
what's really happening there at Apache.

Later...
Kevin





Re: worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Ryan Bloom

On Thursday 29 November 2001 09:20 am, Brian Pane wrote:
>  From a performance perspective, the two limitations that I see in
> the current worker implementation are:
>   * We're basically guaranteed to have to do an extra context switch on
> each connection, in order to pass the connection from the listener
> thread to a worker thread.
>   * The passing of a pool from listener to worker makes it very tricky
> to optimize away all the mutexes within the pools.
>
> So...please forgive me if this has already been considered and dismissed
> a long time ago, but...why can't the listener and worker be the same
> thread?

That's where we were before worker, with the threaded MPM.  There are
thread management issues with that model, and it doesn't scale as well.

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



worker mpm: can we optimize away the listener thread?

2001-11-29 Thread Brian Pane

 From a performance perspective, the two limitations that I see in
the current worker implementation are:
  * We're basically guaranteed to have to do an extra context switch on
each connection, in order to pass the connection from the listener
thread to a worker thread.
  * The passing of a pool from listener to worker makes it very tricky
to optimize away all the mutexes within the pools.

So...please forgive me if this has already been considered and dismissed
a long time ago, but...why can't the listener and worker be the same thread?

I'm thinking of a design like this:

  * There's no dedicated listener thread.

  * Each worker thread does this:
  while (!time to shut down) {
wait for another worker to wake me up;
if (in shutdown) {
  exit this thread;
}
accept on listen mutex or pipe of death;
if (pipe of death triggered) {
  set "in shutdown" flag;
  wake up all the other workers;
  exit this thread;
}
else {
  pick another worker and wake it up;
  handle the connection that I just accepted;
}
  }

--Brian





Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread TOKILEY


In a message dated 11/29/2001 3:23:32 AM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> "William A. Rowe, Jr." wrote:
>  > 
>  > What is the http content-encoding value for this facility?  deflate
>  > Ergo, mod_deflate.
>  
>  And the name change from mod_gz to mod_deflate was suggested
>  by Roy, whom I think knows HTTP better than anyone else here..

Knowing HTTP is one thing... knowing compression formats is another.

Does the output of mod_deflate have a GZIP and/or ZLIB header on it, or not?
Even those 2 headers are NOT the same but that's yet another story.

If it does... then it's not really 'pure deflate'.
If it doesn't... then it MIGHT be pure deflate but it won't do 
squat in most legacy and/or modern browsers.

Yours...
Kevin



Re: [PATCH] apache-1.3/src/os/netware/os.c

2001-11-29 Thread Brad Nicholes

Pavel,
Your patch looks good.  It looks like a much cleaner solution. 
What version of Winsock have you tested this patch with?  Did you try it
on NW6?  As soon as I get some time to implement it and test it myself,
I will get it checked in.

thanks,
Brad


>>> [EMAIL PROTECTED] Wednesday, November 28, 2001 7:50:57 AM >>>
Hi,
attached patch to fix invalid redirections from 
"https://some_site/some_location" to
"http://some_site/some_location/".

Pavel



[PATCH] optimizations for pools and threads

2001-11-29 Thread Brian Pane

I just tried out a simpler approach to fixing the mutex contention
within pool cleanups.  It seems to work reasonably well, so I'm
presenting it here for feedback.

This patch does three things:

  * Optimize away mutexes in the case of subrequest pool
deletion (an important special case because it's in requests
with a lot of subrequests, like SSI pages, that we see the
worst mutex overhead)

  * Add support for a "free list cache" within a pool. If this is
enabled (on a per-pool basis), blocks used for the pool's descendants
will be put into this private free list rather than the global one
when the descendants are destroyed.  Subsequent subpool creation
for this parent pool can take advantage of the pool's free list cache
to bypass the global free list and its mutex.  (This is useful for
things like mod_include.)

  * Switch to the new lock API and use nonrecursive mutexes.
(Thanks to Aaron for this suggestion.  According to profiling
data, the old lock API spends literally half its time in
apr_os_thread_equal() and apr_os_thread_current().)

This patch removes essentially all the pool-related mutex operations
in prefork, and a lot of the mutex ops in worker.

--Brian




Index: server/mpm/prefork/prefork.c
===
RCS file: /home/cvs/httpd-2.0/server/mpm/prefork/prefork.c,v
retrieving revision 1.223
diff -u -r1.223 prefork.c
--- server/mpm/prefork/prefork.c2001/11/29 04:06:05 1.223
+++ server/mpm/prefork/prefork.c2001/11/29 14:41:55
@@ -559,7 +559,8 @@
  * we can have cleanups occur when the child exits.
  */
 apr_pool_create(&pchild, pconf);
-
+apr_pool_set_options(pchild, APR_POOL_OPT_SINGLE_THREADED |
+ APR_POOL_OPT_CACHE_FREELIST);
 apr_pool_create(&ptrans, pchild);
 
 /* needs to be done before we switch UIDs so we have permissions */
Index: server/mpm/worker/worker.c
===
RCS file: /home/cvs/httpd-2.0/server/mpm/worker/worker.c,v
retrieving revision 1.43
diff -u -r1.43 worker.c
--- server/mpm/worker/worker.c  2001/11/22 05:13:29 1.43
+++ server/mpm/worker/worker.c  2001/11/29 14:41:56
@@ -641,6 +641,8 @@
 if (!workers_may_exit) {
 /* create a new transaction pool for each accepted socket */
 apr_pool_create(&ptrans, tpool);
+apr_pool_set_options(ptrans, APR_POOL_OPT_SINGLE_THREADED |
+ APR_POOL_OPT_CACHE_FREELIST);
 
 rv = lr->accept_func(&csd, lr, ptrans);
 
Index: srclib/apr/include/apr_pools.h
===
RCS file: /home/cvspublic/apr/include/apr_pools.h,v
retrieving revision 1.63
diff -u -r1.63 apr_pools.h
--- srclib/apr/include/apr_pools.h  2001/11/09 17:50:48 1.63
+++ srclib/apr/include/apr_pools.h  2001/11/29 14:41:57
@@ -363,6 +363,32 @@
 apr_pool_t *pparent,
 int (*apr_abort)(int retcode));
 
+/* Options for use with apr_pool_set_options: */
+
+/** Optimize a pool for use with a single thread only */
+#define APR_POOL_OPT_SINGLE_THREADED0x1
+/** Maintain a cache of free blocks in a pool for use in creating subpools */
+#define APR_POOL_OPT_CACHE_FREELIST 0x2
+
+/**
+ * Set performance tuning options for a pool
+ * @param p The pool
+ * @param flags The APR_POOL_OPT_* flags to enable for the pool,
+ *  OR'ed together
+ * @remark Options set with this function are inherited by subsequently
+ * created child pools, although the children can override the
+ * settings.
+ */
+APR_DECLARE(void) apr_pool_set_options(apr_pool_t *p, apr_uint32_t flags);
+
+/**
+ * Retrieve the performance tuning options for a pool
+ * @param p The pool
+ * @return The APR_POOL_OPT_* flags that are enabled for the pool,
+ * OR'ed together
+ */
+APR_DECLARE(apr_uint32_t) apr_pool_get_options(const apr_pool_t *p);
+
 /**
  * Register a function to be called when a pool is cleared or destroyed
  * @param p The pool register the cleanup with 
Index: srclib/apr/memory/unix/apr_pools.c
===
RCS file: /home/cvspublic/apr/memory/unix/apr_pools.c,v
retrieving revision 1.117
diff -u -r1.117 apr_pools.c
--- srclib/apr/memory/unix/apr_pools.c  2001/11/23 16:47:52 1.117
+++ srclib/apr/memory/unix/apr_pools.c  2001/11/29 14:41:58
@@ -67,7 +67,7 @@
 #include "apr_general.h"
 #include "apr_pools.h"
 #include "apr_lib.h"
-#include "apr_lock.h"
+#include "apr_thread_mutex.h"
 #include "apr_hash.h"
 
 #if APR_HAVE_STDIO_H
@@ -207,6 +207,15 @@
 int (*apr_abort)(int retcode);
 /** A place to hold user data associated with this pool */
 struct apr_hash_t *prog_data;
+/** Tuning options for this pool */
+int flags;
+/** Optiona

Re: cvs commit: httpd-2.0/support ab.c

2001-11-29 Thread Jeff Trawick

"William A. Rowe, Jr." <[EMAIL PROTECTED]> writes:

> Thank goodness for compilers who can read xprintf syntax, and thanks for taking

and thank goodness for cron and unattended updates/builds that
compare old make.stderr with new make.stderr and send e-mail as
appropriate :)

-- 
Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site:
   http://www.geocities.com/SiliconValley/Park/9289/
 Born in Roswell... married an alien...



Re: cvs commit: httpd-2.0/support ab.c

2001-11-29 Thread William A. Rowe, Jr.

Thank goodness for compilers who can read xprintf syntax, and thanks for taking
a few minutes on this, Jeff.

Bill

- Original Message - 
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, November 29, 2001 5:30 AM
Subject: cvs commit: httpd-2.0/support ab.c


> trawick 01/11/29 03:30:57
> 
>   Modified:support  ab.c
>   Log:
>   "totalcon / requests" is no longer double either, so %5e doesn't fly





RE: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread Sander Striker

> From: Rodent of Unusual Size [mailto:[EMAIL PROTECTED]]
> Sent: 29 November 2001 12:26
> "William A. Rowe, Jr." wrote:
> > 
> > As described by Ken?  Once again, what would he have to do
> > with that?
> 
> I just happen to be the chap with the cron job that sends
> the current STATUS file every Wednesday.  I don't maintain it;
> that's a shared responsibility (and one of its deficiencies,
> IMHO, but I've ranted about that before :-).

Just a quick note: the cronjob could be run by a special account
specifically for this (and probably other automated stuff).

Sander




Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread Rodent of Unusual Size

"William A. Rowe, Jr." wrote:
> 
> What is the http content-encoding value for this facility?  deflate
> Ergo, mod_deflate.

And the name change from mod_gz to mod_deflate was suggested
by Roy, whom I think knows HTTP better than anyone else here..
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"All right everyone!  Step away from the glowing hamburger!"



Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread Rodent of Unusual Size

"William A. Rowe, Jr." wrote:
> 
> As described by Ken?  Once again, what would he have to do
> with that?

I just happen to be the chap with the cron job that sends
the current STATUS file every Wednesday.  I don't maintain it;
that's a shared responsibility (and one of its deficiencies,
IMHO, but I've ranted about that before :-).
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"All right everyone!  Step away from the glowing hamburger!"



Re: [STATUS] (httpd-2.0) Wed Nov 28 23:45:08 EST 2001

2001-11-29 Thread TOKILEY


Hello William...
This is Kevin Kiley again...

See comments inline below...

In a message dated 11/28/2001 10:59:26 PM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> From: <[EMAIL PROTECTED]>
>  Sent: Thursday, November 29, 2001 12:30 AM
>  
>  > In a message dated 11/28/2001 10:21:46 PM Pacific Standard Time, 
>  > [EMAIL PROTECTED] writes:
>  > 
>  > >  If you have any doubts about why sometimes submissions aren't 
> considered 
>  > >  for inclusion in any open source project ... well there you have it.
>  > 
>  > Ya know what...
>  > I am going to stand still for this little spanking session because
>  > I am willing to admit when I have made a mistake and unlike most 
>  > times before on this forum when you guys have tried to make a
>  > punching bag out of me... this time I give you permission to fire away.
>  
>  > Ok... now on to the next question ( already asked by someone else )
>  > 
>  > Whatever happened to the 'other candidate for submission' as
>  > described by Coar? I assume that was mod_gzip itself?
>  
>  As described by Ken?  Once again, what would he have to do with that?

Nothing, really, other than the fact that the only reason I asked
is that he is the one who updated the status file to read...

+1 Cliff ( there's now another candidate to be evaluated )

...and failed to actually mention what that 'other candidate' really was.

I believe I missed any messages from a 'Cliff' and so I was never
sure myself what that was all about since it wasn't clear in the STATUS.

I have never really been sure what happened to the complete 
working mod_gzip for Apache 2.0 that was (freely) submitted 
( after both public and private urgings by Apache developers )
If that's what Ken's note was really referring to then OK but I 
was personally never sure since it wasn't specific. I thought 
maybe this guy Cliff submitted something, too. 

I remember mod_gzip for Apache 2.0 was immediately hacked upon
right after submission by some people ( Justin? Ian? Don't remember ) and
they started removing features without even fully reading the source code
and/or understanding what they were for ( and then put some of them back
after I explained some things ), but all of that work just died out into
silence and the STATUS file became the only remnant of the whole firestorm
that Justin started by asking for Ian's mod_gz to be dumped into the tree ASAP.

A LOT of folks on the mod_gzip forum caught the whole discussion
at Apache and started asking us 'Is that 'other candidate really
mod_gzip for 2.0 or is it 'something else' and our response was
always 'We do not know for sure... ask them'.

And (FYI) a few people came to Apache and DID ask 'What is
the status of mod_gzip version 2.0?' and no one even ack'ed
their messages so we assumed it wasn't even being considered.
  
>  jwoolley01/09/15 12:18:59
>  
>Modified:.STATUS
>Log:
>A chilly day in Charlottesville...
>
>Revision  ChangesPath
>1.294 +3 -2  httpd-2.0/STATUS
>  [snip]
>@@ -117,7 +117,8 @@
>   and in-your-face.)  This proposed change would not depricate 
Alias.
> 
> * add mod_gz to httpd-2.0 (in modules/experimental/)
>-  +1: Greg, Justin, Cliff, ben, Ken, Jeff
>+  +1: Greg, Justin, ben, Ken, Jeff
>+   0: Cliff (there's now another candidate to be evaluated)
>0: Jim (premature decision at present, IMO)
>   -0: Doug, Ryan
>   
>  Kevin... I believe I've generally treated you civilly... Read the end of my
>  previous response above.  A good number of non-combatants to these 'gzip 
> wars'
>  are really disgusted with the language and attitude on list.  Much of that
>  has turned on your comments and hostility.

If anyone really views a few 'heated exchanges' on a public forum over
some specific technology issues as a 'war' then I'm sorry but I still won't 
apologize for being passionate about something and willing to argue/defend it.

Email is a strange medium. Some people take it way too seriously, methinks.

>  In accepting a contribution, the submitter is generally expected to support
>  the submission, ongoing.  

Okay... mind blown... that is the exact OPPOSITE of the argument
that I believe even YOU were making during the 'Please why won't
you submit mod_gzip for 2.0 before we go BETA' exchanges. One
of the arguments I made ( Capital I for emphasis ) was that if I
was going to 'support' it I wanted to see at least one good beta
of Apache 2.0 before the submission was made. Strings of arguments
came right back saying "That should NOT be your concern... if you
submit mod_gzip for Apache 2.0 then WE will support it, not YOU".

Seriously... check the threads if you have time... the fact that Apache
would NOT be relying on us to support it was one of the 'arm twisting'
arguments that was made to try and get us to submit the code BEFORE
Beta so that Ian's mod_gz wouldn't be the 'only choice'.

> Everyone here enjoys work