Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Rodent of Unusual Size
Maybe it's because of all the attention on 2.0, but suddenly
t/TEST is hanging when run against a 1.3 server.  This is
new behaviour since 2 December 2001, when it was working
fine.

What I get now is:

% t/TEST
setting ulimit to allow core files
ulimit -c unlimited
 exec t/TEST -v apache/etags
/tmp/ap1/bin/httpd -X -d /home/coar/httpd-test/perl-framework/t -f 
/home/coar/httpd-test/perl-framework/t/conf/httpd.conf -DAPACHE1 
using Apache/1.3.23-dev 
waiting for server to start: ok (waited 0 secs)
server localhost:8529 started
server localhost:8530 listening (mod_headers)
server localhost:8531 listening (mod_proxy)
server localhost:8532 listening (mod_vhost_alias)

and there it sits.  It sat there for longer than ten minutes.
This is repeatable, and I don't know what it's waiting for --
I can telnet to localhost:8529 and 'HEAD / HTTP/1.0' just
fine.

This is with a completely vanilla checkout of the test
framework, and a slightly patched server.  I *believe*
I tested it with a completely vanilla server, too, but
I'm not positive.

Sorry I'm not using that bug_report.pl (or whatever it
was) script, Stas, but it doesn't appear to be part of
the httpd-test repository.  Apologies also for not being
much help in debugging these things, but I just can't
trace my way through the Deep Magic variety of Perl
used here..
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!


Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread William A. Rowe, Jr.
From: Rodent of Unusual Size [EMAIL PROTECTED]
Sent: Wednesday, January 02, 2002 10:47 PM


 Maybe it's because of all the attention on 2.0, but suddenly
 t/TEST is hanging when run against a 1.3 server.  This is
 new behaviour since 2 December 2001, when it was working
 fine.

t/TEST -d=lwp 2

should show you where things stall, in a bit more detail.



Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Stas Bekman

Sorry I'm not using that bug_report.pl (or whatever it
was) script, Stas, but it doesn't appear to be part of
the httpd-test repository.  Apologies also for not being
much help in debugging these things, but I just can't
trace my way through the Deep Magic variety of Perl
used here..
It's moved. Now it's autogenerated and named t/REPORT.
Whenever you have a hanging problem in Perl, the solution is very 
simple. Put this into your code:

  use Carp ();
  $SIG{'USR2'} = sub {
     Carp::confess("caught SIGUSR2!");
  };
and then kill the process with:
% kill -USR2 PID
And the printed trace will tell you exactly where the code hangs.
_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/


Cookie Patch for Flood

2002-01-03 Thread Chris Williams
I found a bug in flood_round_robin.c.  The apr_pstrcat on line 146 should
have NULL as the last argument.  I am new to submitting patches so if
someone could let me know the correct way to do it, I will repost.  Without
this, you get a bunch of garbage in the cookie string if there is more than
one cookie.
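
As a rough illustration (the helper below is made up for this note, not flood's
code): apr_pstrcat() is variadic and walks its arguments until it hits a NULL
pointer, so a call without the sentinel reads past the real arguments and picks up
whatever happens to follow on the stack:

#include <apr_pools.h>
#include <apr_strings.h>

/* illustrative helper only, not flood's actual code */
static char *append_cookie(apr_pool_t *pool, char *cookies,
                           const char *name, const char *value)
{
    /* wrong: no sentinel, so apr_pstrcat() keeps reading "arguments"
       past ";" and copies stack garbage into the cookie string */
    /* cookies = apr_pstrcat(pool, cookies, ";"); */

    /* right: terminate the argument list with NULL */
    cookies = apr_pstrcat(pool, cookies, ";", NULL);
    return apr_pstrcat(pool, cookies, name, "=", value, NULL);
}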

Thanks
Chris



Re: Cookie Patch for Flood

2002-01-03 Thread Aaron Bannert
On Thu, Jan 03, 2002 at 09:43:48AM -0500, Chris Williams wrote:
 I found a bug in flood_round_robin.c.  The apr_pstrcat on line 146 should
 have NULL as the last argument.  I am new to submitting patches so if
 someone could let me know the correct way to do it, I will repost.  Without
 this, you get a bunch of garbage in the cookie string if there is more that
 one cookie.

Thanks for the find! Typically patches are posted as context diffs. If
you can manage it with your mailer, it's best to attach the patches inline
(as opposed to a mime/uuencoded attachment).

There's some good info on this site for developers: http://dev.apache.org/
You'll be particularly interested in: http://dev.apache.org/patches.html

So is this what you meant?


Index: flood_round_robin.c
===
RCS file: /home/cvs/httpd-test/flood/flood_round_robin.c,v
retrieving revision 1.19
diff -u -u -r1.19 flood_round_robin.c
--- flood_round_robin.c 3 Oct 2001 01:24:01 -   1.19
+++ flood_round_robin.c 3 Jan 2002 14:55:43 -
@@ -143,7 +143,7 @@
 while (cook)
 {
 if (cook != p->cookie)
-cookies = apr_pstrcat(p->pool, cookies, ";");
+cookies = apr_pstrcat(p->pool, cookies, ";", NULL);
 
 cookies = apr_pstrcat(p->pool, cookies, cook->name, "=", 
   cook->value, NULL);





RE: Cookie Patch for Flood

2002-01-03 Thread Chris Williams
Yes it is.  I will review those links for the next time.
Thanks!
Chris

 -Original Message-
 From: Aaron Bannert [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 03, 2002 9:56 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Cookie Patch for Flood


 On Thu, Jan 03, 2002 at 09:43:48AM -0500, Chris Williams wrote:
  I found a bug in flood_round_robin.c.  The apr_pstrcat on line
 146 should
  have NULL as the last argument.  I am new to submitting patches so if
  someone could let me know the correct way to do it, I will
 repost.  Without
  this, you get a bunch of garbage in the cookie string if there
 is more that
  one cookie.

 Thanks for the find! Typically patches are posted as context diffs. If
 you can manage it with your mail, it's best to attach the patches inline
 (as opposed to a mime/uuencoded attachment).

 There's some good info on this site for developers: http://dev.apache.org/
 You'll be particular interested in: http://dev.apache.org/patches.html

 So is this what you meant?


 Index: flood_round_robin.c
 ===
 RCS file: /home/cvs/httpd-test/flood/flood_round_robin.c,v
 retrieving revision 1.19
 diff -u -u -r1.19 flood_round_robin.c
 --- flood_round_robin.c   3 Oct 2001 01:24:01 -   1.19
 +++ flood_round_robin.c   3 Jan 2002 14:55:43 -
 @@ -143,7 +143,7 @@
  while (cook)
  {
  if (cook != p->cookie)
 -cookies = apr_pstrcat(p->pool, cookies, ";");
 +cookies = apr_pstrcat(p->pool, cookies, ";", NULL);

  cookies = apr_pstrcat(p->pool, cookies, cook->name, "=",
cook->value, NULL);






flood's handling of pool lifetimes/data scoping

2002-01-03 Thread Aaron Bannert
I've been thinking about how to handle data lifetime scoping in
flood and have come up with a solution, but I want some feedback
before I jump in and do it.

Theoretically, one instance of flood might need data that is scoped
at each of the following lifetimes:

all farms
each farm
all farmers
each farmer
all profiles
each profile

and possibly also:
each use of a urllist
each url in a urllist

and maybe even:
each flood (global per-process level)


Potentially, a lower-level iterator like a profile may want to store
data at a higher level like a farm. I don't see it working the other way
around, since farms don't know much about their farmers other than when
they start and stop. So the question is: How do the lower-level iterators
get access to the higher-level scopes (aka pools)? Since we'll always have
this hierarchy, perhaps we can take advantage of that somehow to allow
a lower level access to its parent's pool? Ideas?
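
As a rough sketch of the parent-pool idea (the struct and function names here are
invented for illustration, not flood's actual types), each level would create its
pool as a child of the level above and keep a pointer back to that scope, so
longer-lived data can be allocated from the parent's pool:

#include <apr_pools.h>

typedef struct farm_ctx {
    apr_pool_t *pool;            /* lives for the whole farm */
} farm_ctx;

typedef struct farmer_ctx {
    apr_pool_t *pool;            /* lives for one farmer */
    farm_ctx   *farm;            /* link back to the enclosing scope */
} farmer_ctx;

static apr_status_t farmer_init(farmer_ctx *farmer, farm_ctx *farm)
{
    farmer->farm = farm;
    /* child pool: cleaned up with the farmer, or when the farm pool dies */
    return apr_pool_create(&farmer->pool, farm->pool);
}

static void *store_at_farm_scope(farmer_ctx *farmer, apr_size_t len)
{
    /* per-farmer data comes from farmer->pool; data that must outlive
       this farmer is allocated from the parent (farm) pool instead */
    return apr_palloc(farmer->farm->pool, len);
}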

-aaron


Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Stas Bekman
Found the problem; temporarily replace
$child_pid = open $child_in_pipe, "|$cmd";
with:
system "$cmd &";
in Apache-Test/lib/Apache/TestServer.pm
the way it was before (well, sort of; it's not good in failure cases, but 
at least it starts)

this is because of my latest patch to make t/TEST immediately detect 
failures. Any ideas why this doesn't work with 1.3? Something goes wrong 
with the spawned process.

Tomorrow I'll work on it again, good night :)
BTW, how do you build your 1.3? For some reason I don't get the support 
utils built with 'make'; I have to cd to support and run make again.

To answer your question I can reproduce the problem with 1.3 now.
_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/


Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Rodent of Unusual Size
Stas Bekman wrote:
 
 Found the problem, temporary replace
 
 $child_pid = open $child_in_pipe, "|$cmd";
 
 with:
 
 system "$cmd &";
 
 in Apache-Test/lib/Apache/TestServer.pm

Thanks, I'll try that..

 Any ideas why this doesn't work with 1.3? Something goes wrong
 with the spawned process.

Not at the moment; I lack context (like the value of $cmd).

 BTW, how do you build your 1.3? For some reason I don't get support
 utils built with 'make' have to cd to support and make again.

The utils are built by 'make install' not by 'make', ISTR.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!


Re: Cookie Patch for Flood

2002-01-03 Thread Justin Erenkrantz
On Thu, Jan 03, 2002 at 06:56:16AM -0800, Aaron Bannert wrote:
 There's some good info on this site for developers: http://dev.apache.org/
 You'll be particular interested in: http://dev.apache.org/patches.html

FWIW, please point people at:

http://www.apache.org/dev/
http://httpd.apache.org/dev/

More specifically:

http://httpd.apache.org/dev/patches.html

Joshua made a commit to change dev.apache.org, but he never updated it.
The front page for dev.apache.org now says it is obsolete.  I wish
that site would just die.  -- justin



Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Stas Bekman
On Fri, 4 Jan 2002, Stas Bekman wrote:

 Found the problem, temporary replace
 
 $child_pid = open $child_in_pipe, "|$cmd";
 
 with:
 
 system "$cmd &";
 
 in Apache-Test/lib/Apache/TestServer.pm
 
 the way it was before (well sort of, it's not good in failure cases, but 
 at least it starts)
 
 this is because of my latest patch to make t/TEST immediately detect 
 failures. Any ideas why this doesn't work with 1.3? Something goes wrong 
 with the spawned process.

This seems to work with 1.3 and 2.0:

my $pid = fork();
unless ($pid) {
    my $status = system $cmd;
    if ($status) {
        $status = $? >> 8;
        error "httpd didn't start! $status";
    }
    CORE::exit $status;
}

instead of system "$cmd &" as it was originally. In this case I can get to 
the return status value with a CHLD sighandler.

Surprisingly for 2.0 it's enough to say:

  $status = system "httpd ...";

and everything is cool, since system returns almost immediately. Not with 
1.3, though it restarts the same way (I guess not exactly the same).

please test this patch (against current cvs) and if it's good I'll commit 
it. (it includes all my latest status propagation work, which is not 
committed)

Index: Apache-Test/lib/Apache/TestRun.pm
===
RCS file: 
/home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestRun.pm,v
retrieving revision 1.80
diff -u -r1.80 TestRun.pm
--- Apache-Test/lib/Apache/TestRun.pm   31 Dec 2001 09:09:43 -  1.80
+++ Apache-Test/lib/Apache/TestRun.pm   3 Jan 2002 19:14:55 -
@@ -17,6 +17,7 @@
 use Config;
 
 use constant STARTUP_TIMEOUT => 300; # secs (good for extreme debug cases)
+use subs qw(exit_shell exit_perl);
 
 my %core_files  = ();
 
@@ -137,7 +138,7 @@
 my @invalid_argv = @{ $self->{argv} };
 if (@invalid_argv) {
 error "unknown opts or test names: @invalid_argv";
-exit;
+exit_perl 0;
 }
 
 }
@@ -258,16 +259,17 @@
 return unless $_[0] =~ /^Failed/i; #dont catch Test::ok failures
 $server->stop(1) if $opts->{'start-httpd'};
 $server->failed_msg("error running tests");
+exit_perl 0;
 };
 
 $SIG{INT} = sub {
 if ($caught_sig_int++) {
 warning "\ncaught SIGINT";
-exit;
+exit_perl 0;
 }
 warning "\nhalting tests";
 $server->stop if $opts->{'start-httpd'};
-exit;
+exit_perl 0;
 };
 
 #try to make sure we scan for core no matter what happens
@@ -383,17 +385,19 @@
 for (@exit_opts) {
 next unless exists $self->{opts}->{$_};
 my $method = "opt_$_";
-exit if $self->$method();
+exit_perl $self->$method();
 }
 
 if ($self->{opts}->{'stop-httpd'}) {
+my $ok = 1;
 if ($self->{server}->ping) {
-$self->{server}->stop;
+$ok = $self->{server}->stop;
+$ok = $ok < 0 ? 0 : 1; # adjust to 0/1 logic
 }
 else {
 warning "server $self->{server}->{name} is not running";
 }
-exit;
+exit_perl $ok;
 }
 }
 
@@ -407,7 +411,7 @@
   ($test_config->{APXS} ?
"an apxs other than $test_config->{APXS}" : "apxs").
" or put either in your PATH";
-exit 1;
+exit_perl 0;
 }
 
 my $opts = $self->{opts};
@@ -427,7 +431,8 @@
 }
 
 if ($opts->{'start-httpd'}) {
-exit 1 unless $server->start;
+my $status = $server->start;
+exit_perl 0 unless $status;
 }
 elsif ($opts->{'run-tests'}) {
 my $is_up = $server->ping
@@ -436,7 +441,7 @@
  $server->wait_till_is_up(STARTUP_TIMEOUT));
 unless ($is_up) {
 error "server is not ready yet, try again.";
-exit;
+exit_perl 0;
 }
 }
 }
@@ -464,7 +469,7 @@
 sub stop {
 my $self = shift;
 
-$self->{server}->stop if $self->{opts}->{'stop-httpd'};
+return $self->{server}->stop if $self->{opts}->{'stop-httpd'};
 }
 
 sub new_test_config {
@@ -491,13 +496,10 @@
 }
 close $sh;
 
-open $sh, "|$binsh" or die;
-my @cmd = ("ulimit -c unlimited\n",
-   "exec $0 @ARGV");
-warning "setting ulimit to allow core files\n@cmd";
-print $sh @cmd;
-close $sh;
-exit; #exec above will take over
+my $command = "ulimit -c unlimited; $0 @ARGV";
+warning "setting ulimit to allow core files\n$command";
+exec $command;
+die "exec $command has failed"; # shouldn't be reached
 }
 
 sub set_ulimit {
@@ -548,13 +550,13 @@
 warning "forcing Apache::TestConfig object save";
 $self->{test_config}->save;
 warning "run 't/TEST -clean' to clean up before continuing";
-exit 1;
+exit_perl 0;
 }
 }
 
 if ($self->{opts}->{configure}) {
 warning "reconfiguration done";
-exit;
+exit_perl 1;
 }
 
 

Re: Recent perl-frameworks broke 1.3 testing..?

2002-01-03 Thread Stas Bekman
On Thu, 3 Jan 2002, Rodent of Unusual Size wrote:

 Stas Bekman wrote:
  
  Found the problem, temporary replace
  
  $child_pid = open $child_in_pipe, "|$cmd";
  
  with:
  
  system "$cmd &";
  
  in Apache-Test/lib/Apache/TestServer.pm
 
 Thanks, I'll try that..
 
  Any ideas why this doesn't work with 1.3? Something goes wrong
  with the spawned process.
 
 Not at the moment; I lack context (like the value of $cmd).

the value of $cmd is what gets printed, e.g. with 1.3:

/home/stas/httpd/1.3/bin/httpd -X -d 
/home/stas/apache.org/httpd-test/perl-framework/t -f 
/home/stas/apache.org/httpd-test/perl-framework/t/conf/httpd.conf 
-DAPACHE1 -DPERL_USEITHREADS

now you have the context :) , also see my last email, there is a 
difference between how apache restarts with 1.x and 2.x.
 
  BTW, how do you build your 1.3? For some reason I don't get support
  utils built with 'make' have to cd to support and make again.
 
 The utils are built by 'make install' not by 'make', ISTR.

well, they don't seem to get built for me. I have to do it manually. Could 
be some libtool problem. So what are your build args so I can try them?

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: [PATCH] mod_proxy truncates status line

2002-01-03 Thread Graham Leggett

Adam Sussman wrote:

  Are you 100% sure the buffer is big enough to do this? If the buffer is
  of size len the zero will be written past the end of the buffer.
 
 
 In the current code, len is strlen(buffer) so it can be safely assumed
 to be one less than the length of the buffer (provided of course that
 ap_proxy_string_read can be trusted).

The contents of a buffer can never be trusted, though - this could become
an overflow and potentially be exploited.

Regards,
Graham
-- 
-
[EMAIL PROTECTED]There's a moon
over Bourbon Street
tonight...




Re: cvs commit: apache-1.3/src/os/netware ApacheCore.imp

2002-01-03 Thread Pavel Novy

So, here is a patch to fix the issue with the GNU build attached. An 
essential part of NLM dump (unfixed) shows the problem:

  DATA:00012BA0 = config_log_module
  DATA:00012C20 = asis_module
  DATA:00012CA0 = imap_module
  DATA:00012DA0 = setenvif_module
---
  DATA:00013DF0 = ap_server_post_read_config
  DATA:00013E20 = ap_server_config_defines
  DATA:00013E60 = ap_coredump_dir
  DATA:00015E60 = ap_lock_fname
  DATA:00015E80 = ap_bind_address
  DATA:00015F60 = ap_server_pre_read_config
Data Size:   00013220 ( 78368)

All records with offset outside of data size boundary are causing this:

  SERVER-5.00-1554: Invalid public record offset in load file.
  Module APACHEC.NLM NOT loaded

I'm using the ApacheCore.imp as the export file when building apachec.nlm 
and suppose it's correct behaviour...

Pavel

P.S.: An enhancement of the nlmconv utility would be a more accurate fix; 
that's what I meant exactly.

Pavel Novy wrote:

 Tested those new changes on build with the GNU tools and it seems that 
 there is a problem with new ApacheCore.imp file. It's not a new one - a 
 nlmconv utility used here for NLM linking is not able to allocate a 
 physical space for uninitialized variables, so if any of such symbols is 
 exported (nlmconv doesn't produce any warning), it's not possible to 
 load a NLM module, then. The core module (apachec.nlm) is affected and 
 the only way to fix this is to change those uninitialized variables to 
 initialized (yes, we also could ask for fix in the nlmconv utility, but 
 it's much harder). I will take a look which variable(s) is(are) causing 
 this and will let you know.
 
 Pavel



--- original/src/main/http_main.c   Fri Dec 28 06:12:03 2001
+++ modified/src/main/http_main.c   Thu Jan  3 05:25:20 2002
@@ -247,9 +247,9 @@
 API_VAR_EXPORT int ap_excess_requests_per_child=0;
 API_VAR_EXPORT char *ap_pid_fname=NULL;
 API_VAR_EXPORT char *ap_scoreboard_fname=NULL;
-API_VAR_EXPORT char *ap_lock_fname;
+API_VAR_EXPORT char *ap_lock_fname=NULL;
 API_VAR_EXPORT char *ap_server_argv0=NULL;
-API_VAR_EXPORT struct in_addr ap_bind_address;
+API_VAR_EXPORT struct in_addr ap_bind_address={};
 API_VAR_EXPORT int ap_daemons_to_start=0;
 API_VAR_EXPORT int ap_daemons_min_free=0;
 API_VAR_EXPORT int ap_daemons_max_free=0;
@@ -309,11 +309,11 @@
 
 API_VAR_EXPORT char ap_server_root[MAX_STRING_LEN]="";
 API_VAR_EXPORT char ap_server_confname[MAX_STRING_LEN]="";
-API_VAR_EXPORT char ap_coredump_dir[MAX_STRING_LEN];
+API_VAR_EXPORT char ap_coredump_dir[MAX_STRING_LEN]="";
 
-API_VAR_EXPORT array_header *ap_server_pre_read_config;
-API_VAR_EXPORT array_header *ap_server_post_read_config;
-API_VAR_EXPORT array_header *ap_server_config_defines;
+API_VAR_EXPORT array_header *ap_server_pre_read_config=NULL;
+API_VAR_EXPORT array_header *ap_server_post_read_config=NULL;
+API_VAR_EXPORT array_header *ap_server_config_defines=NULL;
 
 /* *Non*-shared http_main globals... */
 



Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Bill Stoddard


  On Wed, Jan 02, 2002 at 05:15:34PM -0500, Bill Stoddard wrote:
   This patch breaks the proxy.  Specifically, anyone who uses
ap_proxy_make_fake_req().  Get
   a seg fault in ap_get_limit_req_body because r-per_dir_config is NULL.  I'll 
spend
some
   time on this tomorrow unless someone wants to jump on it tonight.
 
  Is it valid for r-per_dir_config to be null?  Hmm.  I wonder if
  ap_get_limit_req_body should be fixed to handle this case instead
  of ap_http_filter?  -- justin

 No.  It's entirely invalid.

 At the very least - you are looking the r-server-lookup_defaults, plus the
 Location  sections in per_dir_config.

 That's always true, anything that changes that assumption is broken.  Now if
 either proxy or your patch skips the initial Location  lookup (or it is
 otherwise circumvented) then you get what you pay for.

It's not that clear to me what the right solution should be. Check out
ap_proxy_http_process_response(). This function reads the -response- from the proxied
server and dummies up a request_rec to do so. So is this a valid approach or not? If it
is, then we do not need to do location/directory walks (and it is fine if
r->per_dir_config is NULL).

Bill




Re: cvs commit: httpd-2.0 STATUS

2002-01-03 Thread Aaron Bannert

On Thu, Jan 03, 2002 at 09:53:38AM -, [EMAIL PROTECTED] wrote:
 jerenkrantz02/01/03 01:53:38
 
   Modified:.STATUS
...
   @@ -149,6 +149,18 @@
 hang. My theory is that this has to do with the
 pthread_cond_*() implementation in FreeBSD, but it's still
 possible that it is in APR.
   +Justin adds: Oh, FreeBSD threads are implemented entirely with 
   + select()/poll()/longjmp().  Welcome to the nightmare.
   + So, that means a ktrace output also has the thread 
   + scheduling internals in it (since it is all the same to 
   + the kernel).  Which makes it hard to distinguish between 
   + our select() calls and their select() calls.  
   + *bangs head on wall repeatedly*  But, some of the libc_r 
   + files have a DBG_MSG #define.  This is moderately helpful
   + when used with -DNO_DETACH.  The kernel scheduler isn't 
   + waking up the threads on a select().  Yum.  And, I bet 
   + those decrementing select calls have to do with the 
   + scheduler.  Time to brush up on our OS fundamentals.

Good theory, but in my trace we were only looking at the PID of the parent
process, where there aren't any threads (* technically there is only
1 thread). It is almost certainly a bug somewhere, since consuming CPU
without bounds while performing a non-CPU-intensive task is unexpected
behavior. In our case we run waitpid() followed by select() with a
timeout of 1 second (to emulate a sleep()).
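
(A sketch of that idiom for reference, not the actual httpd code: a non-blocking
waitpid() check followed by select() with a one-second timeout used purely as a
sleep.)

#include <sys/select.h>
#include <sys/wait.h>

static void reap_or_sleep(void)
{
    int status;
    if (waitpid(-1, &status, WNOHANG) <= 0) {
        /* nothing has exited: sleep for up to one second */
        struct timeval tv = { 1, 0 };
        (void) select(0, NULL, NULL, NULL, &tv);
    }
}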

The select()-based threading model just means that it's entirely in userspace
and that the threads are not preemptive. It also means that the request
gets stuck until another one comes along and dislodges it. The bug you were
seeing is going to happen on ANY platform with non-preemptive threads,
like:

Netware
FreeBSD
Cygwin (I'm guessing, since he saw the same bug)
Anyone using GNU Pth
Anyone using any other full-userspace non-preemptive thread library.

My guess is we're using a blocking call somewhere in worker that is
not posting an event that the select()-based scheduler can use to do a
context switch on.

apr_thread_yield() anyone?

-aaron



RE: [STATUS] (httpd-2.0) Wed Jan 2 23:45:06 EST 2002

2002-01-03 Thread Dave Seidel

Just a reminder, that I and at least one other person (Dwayne Miller
[[EMAIL PROTECTED]]) have reported that we have been unable to get Apache
to run as an NT service when we do our own builds.  Should I add this to a
bug database somewhere?

- Dave

-Original Message-
From: Rodent of Unusual Size [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 02, 2002 11:45 PM
To: Apache HTTP server developers
Subject: [STATUS] (httpd-2.0) Wed Jan 2 23:45:06 EST 2002


APACHE 2.0 STATUS:  -*-text-*-
Last modified at [$Date: 2002/01/02 19:34:47 $]

[BIG SNIP]





Re: 2.0.30-dev load spiking [was: upgrade to FreeBSD 4.5-PRERELEASE]

2002-01-03 Thread Greg Ames

Greg Ames wrote:
 
 Brian Pane wrote:

  One more thought:  I graphed the CPU utilization from your vmstat
  output, and the spike is mostly sys time, not usr.  So truss/strace
  data may be helpful--especially if truss on that platform can measure
  the time spent in each syscall.
 
 truss doesn't measure time on FreeBSD according to the man page.  But
 maybe there are a lot more syscalls in the failing case, so I'll try it.

~gregames/2.0.30.truss and ~gregames/2_0_28.truss are available on
daedalus, if you're interested.  The former was created by trussing a
process while running log replay against the 2.0.30 server on port 8092;
the latter was a process on the production server.  

There could be some differences in 2.0.30 due to my test environment: it
listens on two ports (adds a poll() before the accept() ), and I was
running thru a firewall which seems to introduce some network errors.

Greg



Emacs stanza?

2002-01-03 Thread Rodent of Unusual Size

[sent last night, looks like it didn't get through..]

I don't know how many people use Emacs to edit the Apache stuff,
but would anyone object to a stanza at the bottom of the source
files to help put Emacs in the right stylistic mood?  To wit,
something like:

/*
 * Local Variables:
 * mode: C
 * c-file-style: "bsd"
 * indent-tabs-mode: nil
 * End:
 */

This would set the tab stop to 4, and keep TAB characters from
being embedded, using spaces always.  And it wouldn't do any
harm to non-Emacs users..
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



Re: Running Apache in the foreground

2002-01-03 Thread Michael Handler

[ resending because i sent from the wrong From: header last time.
  here's to hoping that this doesn't show up twice. apologies in
  advance if it does... ] 

I'd also like to voice my support for implementing Jos' requested
functionality. NO_DETACH may have originally been intended just for
debugging purposes, but many sites are moving to universal process
managers like djb's daemontools, and it would be extremely useful
to all of us if Apache's httpd provided the necessary knob to run
under svscan & supervise out of the box.

(A Google search for apache daemontools patch reveals patches for
this functionality dating back to at least 1.3.12, which should
give you an idea of how long people have been re-implementing this
for each version. :))

Jos Backus wrote:

 In order for httpd to support this mode of operation it should not fork (and
 decouple itself from its parent) but still run in its own process group, so
 that this pgrp only contains it and its children. A workaround would be to use
 the pgrphack program that comes with daemontools, which looks like this:

Aaron Bannert responded:

 Hmm.. You'll have to ensure by other means that httpd is not a process
 group leader, most likely by ensuring that httpd is started by another
 process and not interactively by the shell.

Correct. The don't-fork-but-still-setsid mode is a specialized
method meant for process controllers only, and it should be
administrator brain damage to try and utilize it from an interactive
shell -- much like invoking a daemon with verbose debug arguments
and then complaining about the output. :) I think putting a note
in the documentation to this effect should be sufficient. Also note
that invoking this mode from shells without job control (Solaris
/bin/sh) works fine, as they don't try and create unique process
groups for each invoked job, AFAICT.

I submitted a patch regarding this issue for 1.3.22 earlier today,
in which I also noted 2.0's issue with setsid(2) regarding this.
(Sorry for the bad line-wrap in the PR.)

http://bugs.apache.org/index.cgi/full/9341

Thanks for your consideration!

--michael

-- 
[EMAIL PROTECTED] (michael handler)   washington, dc



Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Ryan Bloom

On Thursday 03 January 2002 05:16 am, Bill Stoddard wrote:
 
   On Wed, Jan 02, 2002 at 05:15:34PM -0500, Bill Stoddard wrote:
This patch breaks the proxy.  Specifically, anyone who uses
 ap_proxy_make_fake_req().  Get
a seg fault in ap_get_limit_req_body because r-per_dir_config is NULL.  I'll 
spend
 some
time on this tomorrow unless someone wants to jump on it tonight.
  
   Is it valid for r-per_dir_config to be null?  Hmm.  I wonder if
   ap_get_limit_req_body should be fixed to handle this case instead
   of ap_http_filter?  -- justin
 
  No.  It's entirely invalid.
 
  At the very least - you are looking the r-server-lookup_defaults, plus the
  Location  sections in per_dir_config.
 
  That's always true, anything that changes that assumption is broken.  Now if
  either proxy or your patch skips the initial Location  lookup (or it is
  otherwise circumvented) then you get what you pay for.
 
 It's not that clear to me what the right solution should be. Checkout
 ap_proxy_http_process_response(). This function reads the -response- from the proxied
 server and dummies up a request_rec to do so. So is this a valid approach or not? If 
it
 is, then we do not need to do location/directory walks (and it is fine if
 r-per_dir_config is NULL.

We must be able to dummy up request_rec structures in order to use filters
that aren't attached to a request.  I believe that r->per_dir_config should be
allowed to be NULL.
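
(A sketch of the guard Justin floated earlier, treating a missing per_dir_config as
"no limit"; this is purely to illustrate the option, not what was committed:)

AP_DECLARE(apr_off_t) ap_get_limit_req_body(const request_rec *r)
{
    core_dir_config *d;

    /* a faked-up request (e.g. from ap_proxy_make_fake_req) may have
       no per-dir config; treat that as "unlimited" instead of faulting */
    if (r->per_dir_config == NULL) {
        return 0;
    }

    d = ap_get_module_config(r->per_dir_config, &core_module);
    return d->limit_req_body;
}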

Ryan

__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: Running Apache in the foreground

2002-01-03 Thread Aaron Bannert

On Wed, Jan 02, 2002 at 05:57:16PM -0500, Michael Handler wrote:
 I'd also like to voice my support for implementing Jos' requested
 functionality. NO_DETACH may have originally been intended just for
 debugging purposes, but many sites are moving to universal process
 managers like djb's daemontools, and it would be extremely useful
 to all of us if Apache's httpd provided the necessary knob to run
 under svscan  supervise out of the box.

I see no reason why this can't be implemented in apache2, and I'll
even test and commit a patch that properly implements it. :) Sorry
I can't offer much more than that. Maybe if I get some more time
later this week I can look into it, but the more surefire way to get
it in would be to provide a patch.

 Correct. The don't-fork-but-still-setsid mode is a specialized
 method meant for process controllers only, and it should be
 administrator brain damage to try and utilize it from an interactive
 shell -- much like invoking a daemon with verbose debug arguments
 and then complaining about the output. :) I think putting a note
 in the documentation to this effect should be sufficient. Also note
 that invoking this mode from shells without job control (Solaris
 /bin/sh) works fine, as they don't try and create unique process
 groups for each invoked job, AFAICT.

I would just expect the -DFOREGROUND patch to check if httpd is the
process group leader and error out instead of calling setsid() and
continuing.
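
(Roughly what that check could look like; a hypothetical sketch, not an actual
patch. setsid(2) fails with EPERM for a process group leader, so bail out early
with a clear message instead:)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void foreground_detach(void)
{
    if (getpgrp() == getpid()) {
        fprintf(stderr, "httpd: FOREGROUND mode must be started by a "
                        "process controller, not as a process group leader\n");
        exit(1);
    }
    if (setsid() == (pid_t) -1) {
        perror("setsid");
        exit(1);
    }
}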

 I submitted a patch regarding this issue for 1.3.22 earlier today,
 in which I also noted 2.0's issue with setsid(2) regarding this.
 (Sorry for the bad line-wrap in the PR.)
 
 http://bugs.apache.org/index.cgi/full/9341

Sorry, I can't comment on 1.3, that's not my forté. Perhaps one of the
ol' timers on here have something to say about it? ;)

-aaron



Re: Emacs stanza?

2002-01-03 Thread Rodent of Unusual Size

Bill Stoddard wrote:
 
 -1 (and I'm an emacs user :-)
 
 Metadata should be kept seperate from data. I.e., you should
 put something like this in your _emacs file :-)

Which a) would need to be done by every Emacs user individually,
and b) new users would know nothing about.

Whatever.  It just seems to me that adding a few lines to the
source files would save us the inevitable and repetitious
pain of style-fixup patches and their effects on people's
work-in-process..  Other ASF projects do this (such as
PHP), so it wouldn't be a new departure.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



Re: Emacs stanza?

2002-01-03 Thread Thomas Eibner

On Thu, Jan 03, 2002 at 08:52:05AM -0800, Bruce Korb wrote:
 Rodent of Unusual Size wrote:
 
  It ... would save us the inevitable and repetitious
  pain of style-fixup patches and their effects on people's
  work-in-process..  Other ASF projects do this (such as
  PHP), so it wouldn't be a new departure.
 
 Another approach, used by postgress, is to have a standardized
 set of arguments for ``indent''.  Regular runs of that fix both
 vi users and x-pasted text.  I use the stanza for my stuff, too.

But it's still kinda ugly having to commit indent fixes.

http://httpd.apache.org/dev/styleguide.html has the indent arguments.

-- 
  Thomas Eibner http://thomas.eibner.dk/ DnsZone http://dnszone.org/
  mod_pointer http://stderr.net/mod_pointer 




Re: 2.0.30-dev load spiking [was: upgrade to FreeBSD 4.5-PRERELEASE]

2002-01-03 Thread Greg Ames

Aaron Bannert wrote:

 Here's a syscall count printed side-by-side:

Thanks much, Aaron.  But we have to be careful - this definately isn't
an apples-to-apples comparison.

 2.0.282.0.30
 
 1696 sendfile 1180 gettimeofday
  920 select   805 read
  355 open 579 open
  322 gettimeofday 577 fcntl
  314 read 359 close
  287 lstat260 getrusage
  199 stat 232 stat
  133 close206 select
  114 getrusage156 writev
  109 fstat134 setsockopt
  102 write134 poll
  100 getdirentrie 134 getsockname
   72 fcntl134 accept
   55 writev   130 write
   50 lseek129 fstat
   50 fstatfs  127 shutdown
   11 shutdown 113 lstat
   11 setsockopt   80 munmap
   11 getsockname  80 mmap
   11 accept   72 getdirentries
8 munmap   36 lseek
8 mmap 36 fstatfs
   18 sendfile
   11 SIGNAL
3 pipe
3 break
1 wait4
1 fork
 
 At first the sendfile difference jumped out at me, perhaps we're doing
 something different in how we decide when to use sendfile? 

well, I think that's something wrong with my log replay setup.  It looks
like sendfile sends the first chunk of data, then I lose the connection
(ergo far fewer sendfile calls) and we get SIGPIPE.  

 Granted, this is not at all under
 the same workload, but I'm assuming that at least one of the load spikes
 was captured in the 2.0.30 trace.

It's just log replay, hoping I might trigger the bad behavior without
having to bounce the live server again.

 
 The other thing that jumps out at me is the existance of 11 SIGNALS
 in the 2.0.30 trace. How often would we expect SIGNAL to occur under
 normal conditions?

SIGPIPE on network connections mostly; could be my firewall.  You see
that a fair amount live, but not at all in the 2_0_28 truss.

 Also, this is not normalized, but the total syscall count for each is
 not that far off:
 
 aaron@daedalus% wc -l ~gregames/2.0.30.truss ~gregames/2_0_28.truss   ~
 5731 /home/gregames/2.0.30.truss
 4938 /home/gregames/2_0_28.truss

That was me looking at the size of the file as I was running log
replay.  I quit when they were close to the same.

As I mentioned in another post, it looks like there's something very
funky going on with reads from a cgi pipe.  Look for
www.apache.org/dyn/closer.cgi in the 2.0.30 truss.

Greg



Re: 2.0.30-dev load spiking [was: upgrade to FreeBSD 4.5-PRERELEASE]

2002-01-03 Thread Aaron Bannert

On Thu, Jan 03, 2002 at 11:56:26AM -0500, Greg Ames wrote:
 I do see some weirdness in 2.0.30 with www.apache.org/dyn/closer.cgi -
 it looks like we're doing one byte reads from the pipe to the cgi.  I
 don't know yet if 2_0_28 does the same.  That's all I've spotted so far,
 except for a couple of annoying extra syscalls that also exist in
 2_0_28.

I don't think that would spike the run queue more than 1, no?

Of course, one-byte reads from a cgi pipe are just plain bogus
to begin with, but on a different scale methinks.

-aaron



Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Bill Stoddard

Here is the problem with this patch... (or with proxy's use of HTTP_IN)...

ap_http_filter is called to read -responses- from the proxied server. This patch
makes an implicit assumption that HTTP_IN is only being used to read requests. So,
we either need to create a whole new filter stack for reading proxy responses or we
need to keep all code that is specific to either requests or responses out of
HTTP_IN. I am leaning in the direction of creating a new filter,
PROXIED_RESPONSE_IN or something of the sort.

Bill

 This patch breaks the proxy.  Specifically, anyone who uses ap_proxy_make_fake_req().
Get
 a seg fault in ap_get_limit_req_body because r-per_dir_config is NULL.  I'll spend 
some
 time on this tomorrow unless someone wants to jump on it tonight.

 Bill

 - Original Message -
 From: [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Wednesday, January 02, 2002 2:56 AM
 Subject: cvs commit: httpd-2.0/server core.c


  jerenkrantz02/01/01 23:56:25
 
Modified:.CHANGES
 include  http_core.h
 modules/http http_protocol.c
 server   core.c
Log:
Fix LimitRequestBody directive by moving the relevant code from
ap_*_client_block to ap_http_filter (aka HTTP_IN).  This is the
only appropriate place for limit checking to occur (otherwise,
chunked input is not correctly limited).
 
Also changed the type of limit_req_body to apr_off_t to match the
other types inside of HTTP_IN.  Also made the strtol call for
limit_req_body a bit more robust.
 
Revision  ChangesPath
1.499 +4 -0  httpd-2.0/CHANGES
 
Index: CHANGES
===
RCS file: /home/cvs/httpd-2.0/CHANGES,v
retrieving revision 1.498
retrieving revision 1.499
diff -u -r1.498 -r1.499
--- CHANGES 31 Dec 2001 21:03:12 - 1.498
+++ CHANGES 2 Jan 2002 07:56:24 - 1.499
@@ -1,4 +1,8 @@
 Changes with Apache 2.0.30-dev
+
+  *) Fix LimitRequestBody directive by placing it in the HTTP
+ filter.  [Justin Erenkrantz]
+
   *) Fix mod_proxy seg fault when the proxied server returns
  an HTTP/0.9 response or a bogus status line.
  [Adam Sussman]
 
 
 
1.58  +3 -3  httpd-2.0/include/http_core.h
 
Index: http_core.h
===
RCS file: /home/cvs/httpd-2.0/include/http_core.h,v
retrieving revision 1.57
retrieving revision 1.58
diff -u -r1.57 -r1.58
--- http_core.h 1 Jan 2002 20:36:18 - 1.57
+++ http_core.h 2 Jan 2002 07:56:24 - 1.58
@@ -234,9 +234,9 @@
  * Return the limit on bytes in request msg body
  * @param r The current request
  * @return the maximum number of bytes in the request msg body
- * @deffunc unsigned long ap_get_limit_req_body(const request_rec *r)
+ * @deffunc apr_off_t ap_get_limit_req_body(const request_rec *r)
  */
-AP_DECLARE(unsigned long) ap_get_limit_req_body(const request_rec *r);
+AP_DECLARE(apr_off_t) ap_get_limit_req_body(const request_rec *r);
 
 /**
  * Return the limit on bytes in XML request msg body
@@ -471,7 +471,7 @@
 #ifdef RLIMIT_NPROC
 struct rlimit *limit_nproc;
 #endif
-unsigned long limit_req_body;  /* limit on bytes in request msg body */
+apr_off_t limit_req_body;  /* limit on bytes in request msg body */
 long limit_xml_body;   /* limit on bytes in XML request msg body */
 
 /* logging options */
 
 
 
1.383 +33 -11httpd-2.0/modules/http/http_protocol.c
 
Index: http_protocol.c
===
RCS file: /home/cvs/httpd-2.0/modules/http/http_protocol.c,v
retrieving revision 1.382
retrieving revision 1.383
diff -u -r1.382 -r1.383
--- http_protocol.c 6 Dec 2001 02:57:19 - 1.382
+++ http_protocol.c 2 Jan 2002 07:56:24 - 1.383
@@ -510,6 +510,8 @@
 
 typedef struct http_filter_ctx {
 apr_off_t remaining;
+apr_off_t limit;
+apr_off_t limit_used;
 enum {
 BODY_NONE,
 BODY_LENGTH,
@@ -536,6 +538,9 @@
 const char *tenc, *lenp;
 f->ctx = ctx = apr_palloc(f->r->pool, sizeof(*ctx));
 ctx->state = BODY_NONE;
+ctx->remaining = 0;
+ctx->limit_used = 0;
+ctx->limit = ap_get_limit_req_body(f->r);
 
 tenc = apr_table_get(f->r->headers_in, "Transfer-Encoding");
 lenp = apr_table_get(f->r->headers_in, "Content-Length");
@@ -562,6 +567,18 @@
 ctx->state = BODY_LENGTH;
 ctx->remaining = atol(lenp);
 }
+
+/* If we have a limit in effect and we know the C-L ahead of
+ * time, stop it here if it is invalid.
+ */
+ 

Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Ryan Bloom

On Thursday 03 January 2002 09:50 am, Bill Stoddard wrote:
 Here is the problem with this patch... (or with proxy's use of HTTP_IN)...
 
 ap_http_filter is called to read -responses- from the proxied server. This patch 
makes an
 implicit assumption that HTTP_IN is only being used to read requests. So, we either 
need
 to create a whole new filter stack for reading proxy responses or we need to keep 
all code
 that is specific to either requests or responses out of HTTP_IN. I am leaning in the
 direction of creating a new filter, PROXIED_RESPONSE_IN or something or the other.

Most of the logic from the filter's perspective for reading requests and responses
is the same though.  In both cases, we must read a bunch of headers, doing folding
if appropriate, and then we read a bunch of body.  In both cases, we must be able
to handle chunking.

I would prefer to just create a new filter to do the limit request body stuff.
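
(For illustration only, with hypothetical names rather than the committed code: the
bookkeeping such a filter would do is small; accumulate the bytes seen and fail once
the configured limit is exceeded, which works the same for chunked and
Content-Length bodies:)

#include <apr.h>

typedef struct body_limit_ctx {
    apr_off_t limit;        /* configured limit; 0 means unlimited */
    apr_off_t limit_used;   /* bytes delivered so far */
} body_limit_ctx;

/* returns 0 while under the limit, -1 once it has been exceeded */
static int body_limit_account(body_limit_ctx *ctx, apr_off_t bytes)
{
    ctx->limit_used += bytes;
    if (ctx->limit && ctx->limit_used > ctx->limit) {
        return -1;  /* caller would respond with 413 Request Entity Too Large */
    }
    return 0;
}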

Ryan

 
 Bill
 
  This patch breaks the proxy.  Specifically, anyone who uses 
ap_proxy_make_fake_req().
 Get
  a seg fault in ap_get_limit_req_body because r-per_dir_config is NULL.  I'll 
spend some
  time on this tomorrow unless someone wants to jump on it tonight.
 
  Bill
 
  - Original Message -
  From: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Wednesday, January 02, 2002 2:56 AM
  Subject: cvs commit: httpd-2.0/server core.c
 
 
   jerenkrantz02/01/01 23:56:25
  
 Modified:.CHANGES
  include  http_core.h
  modules/http http_protocol.c
  server   core.c
 Log:
 Fix LimitRequestBody directive by moving the relevant code from
 ap_*_client_block to ap_http_filter (aka HTTP_IN).  This is the
 only appropriate place for limit checking to occur (otherwise,
 chunked input is not correctly limited).
  
 Also changed the type of limit_req_body to apr_off_t to match the
 other types inside of HTTP_IN.  Also made the strtol call for
 limit_req_body a bit more robust.
  
 Revision  ChangesPath
 1.499 +4 -0  httpd-2.0/CHANGES
  
 Index: CHANGES
 ===
 RCS file: /home/cvs/httpd-2.0/CHANGES,v
 retrieving revision 1.498
 retrieving revision 1.499
 diff -u -r1.498 -r1.499
 --- CHANGES 31 Dec 2001 21:03:12 - 1.498
 +++ CHANGES 2 Jan 2002 07:56:24 - 1.499
 @@ -1,4 +1,8 @@
  Changes with Apache 2.0.30-dev
 +
 +  *) Fix LimitRequestBody directive by placing it in the HTTP
 + filter.  [Justin Erenkrantz]
 +
*) Fix mod_proxy seg fault when the proxied server returns
   an HTTP/0.9 response or a bogus status line.
   [Adam Sussman]
  
  
  
 1.58  +3 -3  httpd-2.0/include/http_core.h
  
 Index: http_core.h
 ===
 RCS file: /home/cvs/httpd-2.0/include/http_core.h,v
 retrieving revision 1.57
 retrieving revision 1.58
 diff -u -r1.57 -r1.58
 --- http_core.h 1 Jan 2002 20:36:18 - 1.57
 +++ http_core.h 2 Jan 2002 07:56:24 - 1.58
 @@ -234,9 +234,9 @@
   * Return the limit on bytes in request msg body
   * @param r The current request
   * @return the maximum number of bytes in the request msg body
 - * @deffunc unsigned long ap_get_limit_req_body(const request_rec *r)
 + * @deffunc apr_off_t ap_get_limit_req_body(const request_rec *r)
   */
 -AP_DECLARE(unsigned long) ap_get_limit_req_body(const request_rec *r);
 +AP_DECLARE(apr_off_t) ap_get_limit_req_body(const request_rec *r);
  
  /**
   * Return the limit on bytes in XML request msg body
 @@ -471,7 +471,7 @@
  #ifdef RLIMIT_NPROC
  struct rlimit *limit_nproc;
  #endif
 -unsigned long limit_req_body;  /* limit on bytes in request msg body */
 +apr_off_t limit_req_body;  /* limit on bytes in request msg body */
  long limit_xml_body;   /* limit on bytes in XML request msg body 
*/
  
  /* logging options */
  
  
  
 1.383 +33 -11httpd-2.0/modules/http/http_protocol.c
  
 Index: http_protocol.c
 ===
 RCS file: /home/cvs/httpd-2.0/modules/http/http_protocol.c,v
 retrieving revision 1.382
 retrieving revision 1.383
 diff -u -r1.382 -r1.383
 --- http_protocol.c 6 Dec 2001 02:57:19 - 1.382
 +++ http_protocol.c 2 Jan 2002 07:56:24 - 1.383
 @@ -510,6 +510,8 @@
  
  typedef struct http_filter_ctx {
  apr_off_t remaining;
 +apr_off_t limit;
 +apr_off_t limit_used;
  enum {
  BODY_NONE,
  BODY_LENGTH,
 @@ -536,6 +538,9 @@
  const char *tenc, *lenp;
  f-ctx = ctx = apr_palloc(f-r-pool, sizeof(*ctx));
  ctx-state = BODY_NONE;
 +

Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread William A. Rowe, Jr.

From: Ryan Bloom [EMAIL PROTECTED]
Sent: Thursday, January 03, 2002 10:14 AM


 On Thursday 03 January 2002 05:16 am, Bill Stoddard wrote:
  
Is it valid for r-per_dir_config to be null?  Hmm.  I wonder if
ap_get_limit_req_body should be fixed to handle this case instead
of ap_http_filter?  -- justin
  
   No.  It's entirely invalid.
  
   At the very least - you are looking the r-server-lookup_defaults, plus the
   Location  sections in per_dir_config.
  
   That's always true, anything that changes that assumption is broken.  Now if
   either proxy or your patch skips the initial Location  lookup (or it is
   otherwise circumvented) then you get what you pay for.
  
  It's not that clear to me what the right solution should be. Checkout
  ap_proxy_http_process_response(). This function reads the -response- from the 
proxied
  server and dummies up a request_rec to do so. So is this a valid approach or not? 
If it
  is, then we do not need to do location/directory walks (and it is fine if
  r-per_dir_config is NULL.
 
 We must be able to dummy up request_rec structures in order to use filters
 that aren't attached to a request.  I believe that r-per_dir_config should be
 allowed to be NULL.

Now I see... and don't think this is the solution.  Think for a moment; any
corrupted module could destroy r->per_dir_config, and we would be none the
wiser.

I think the simplest solution is to fill in r->per_dir_config with 
r->server->lookup_defaults.  The longer solution is to create a default
conf_vector, of an empty configuration.  And the best solution, some point
in the future, might be configuring <RemoteProxy > sections.  The real question
is what is the request actually using within this dummy configuration, and that
would require some single stepping I don't have time for this week.
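
(The "simplest solution" above as a small sketch; the helper and variable names are
hypothetical, and where exactly it would live in the proxy code is left open:)

#include "httpd.h"

/* hypothetical helper: give a faked-up request_rec (rp) a usable
   per-dir config, borrowed from the virtual host defaults of the
   real request (r), instead of leaving it NULL */
static void fake_req_fixup_config(request_rec *rp, const request_rec *r)
{
    if (rp->per_dir_config == NULL) {
        rp->per_dir_config = r->server->lookup_defaults;
    }
}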

Bill







mod_perl 2.0 on Apache 2

2002-01-03 Thread Sebastian Bergmann

  Dunno if this is the right place to ask, but anyhow: Where can I find
  information on building / installing mod_perl 2.0 on Windows? Is this
  already possible?

  Greetings,
Sebastian

-- 
  Sebastian Bergmann
  http://sebastian-bergmann.de/ http://phpOpenTracker.de/

  Did I help you? Consider a gift: http://wishlist.sebastian-bergmann.de/



Re: cvs commit: httpd-2.0 STATUS

2002-01-03 Thread Justin Erenkrantz

On Thu, Jan 03, 2002 at 06:03:00AM -0800, Aaron Bannert wrote:
 Good theory, but in my trace we were only looking at the PID of the parent
 process, where there aren't any threads (* technically there is only
 1 thread). It is almost certainly a bug somewhere, since consuming CPU
 without bounds while performing a non-CPU-intensive task is unexpected
 behavior. In our case we run waitpid() followed by select() with a
 timeout of 1 second (to emulate a sleep()).

If I do a debug of libc_r, I see a lot of calls to select() that
are initiated by the userland scheduler.  So, we can not trust any
select() calls in the ktrace output as they will be part of the 
libc_r internals.  This happens regardless of whether you have threads or not,
for all processes that link against libc_r.

 My guess is we're using a blocking call somewhere in worker that is
 not posting an event that the select()-based scheduler can use to do a
 context switch on.
 
 apr_thread_yield() anyone?

That doesn't seem to be the case here.  The kernel scheduler isn't 
detecting the event and it definitely has code in there to *try* 
to detect it.  My vote is to try and fix the scheduler.  -- justin




Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Bill Stoddard


 From: Ryan Bloom [EMAIL PROTECTED]
 Sent: Thursday, January 03, 2002 10:14 AM


  On Thursday 03 January 2002 05:16 am, Bill Stoddard wrote:
   
 Is it valid for r-per_dir_config to be null?  Hmm.  I wonder if
 ap_get_limit_req_body should be fixed to handle this case instead
 of ap_http_filter?  -- justin
   
No.  It's entirely invalid.
   
At the very least - you are looking the r-server-lookup_defaults, plus the
Location  sections in per_dir_config.
   
That's always true, anything that changes that assumption is broken.  Now if
either proxy or your patch skips the initial Location  lookup (or it is
otherwise circumvented) then you get what you pay for.
  
   It's not that clear to me what the right solution should be. Checkout
   ap_proxy_http_process_response(). This function reads the -response- from the
proxied
   server and dummies up a request_rec to do so. So is this a valid approach or 
not? If
it
   is, then we do not need to do location/directory walks (and it is fine if
   r-per_dir_config is NULL.
 
  We must be able to dummy up request_rec structures in order to use filters
  that aren't attached to a request.  I believe that r-per_dir_config should be
  allowed to be NULL.

 Now I see... and don't think this is the solution.  Think for a moment; any
 corrupted module could destroy r-per_dir_config, and we would be none the
 wiser.

I agree.

 I think the simplest solution is to fill in r-per_dir_config with
 r-server-lookup_defaults.
That will not work. Keep in mind we are reading a proxied -response-. It definitely
wouldn't be nice to put a limit of say 8192 on a response from a proxied server :-)

Now it may make sense to introduce a new config directive to explicitly place a
limit on the size of proxied responses. Yea, it's possible but that doesn't mean it
is useful or is the right thing to do. The check would go into HTTP_IN (or a filter
designed specifically to do these types of checks).

 The longer solution is to create a default
 conf_vector, of an empty configuration.  And the best solution, some point
 in the future, might be configuring RemoteProxy  sections.
I'd prefer not to do this unless there is a compelling end user requirement to do
so. Sure, we can make up scenarios where this would be useful, but we live in the
real world :-)

 The real question
 is what is the request actually using within this dummy configuration, and that
 would require some single stepping I don't have time for this week.

I've spent some time on this, and this is one reason I am sort of interested in a
proxy-specific input filter.  I agree with Ryan that 99.9% would be identical to
what is already in HTTP_IN now.

Bill




Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Justin Erenkrantz

On Thu, Jan 03, 2002 at 02:31:49PM -0500, Bill Stoddard wrote:
 I've spent some time on this and this is one reason I am sort of interested in a 
proxy
 specific input filter.  I agree with Ryan that 99.9% would be identical to what is 
already
 in HTTP_IN now.

Then, why have a separate input filter just for proxy?  IIRC, proxy
could use some work to play nicer with filters, but I don't think
we need to duplicate the code.  My $.02.  Either we do checks like
you added or split it out (which I don't think is worth doing just
for limits).  -- justin




Re: cvs commit: httpd-2.0/server core.c

2002-01-03 Thread Bill Stoddard


 On Thu, Jan 03, 2002 at 02:31:49PM -0500, Bill Stoddard wrote:
  I've spent some time on this and this is one reason I am sort of interested in a 
proxy
  specific input filter.  I agree with Ryan that 99.9% would be identical to what is
already
  in HTTP_IN now.

 Then, why have a separate input filter just for proxy?  IIRC, proxy
 could use some work to play nicer with filters, but I don't think
 we need to duplicate the code.  My $.02.  Either we do checks like
 you added or split it out (which I don't think is worth doing just
 for limits).  -- justin


I voted with a commit :-) I agree with you on all points. Just a little nervous
about other filters being inserted into the input stream that frob stuff in the
request_rec that is not initialized by ap_proxy_make_fake_req(). In practice, it is
probably not a problem though.

Bill




Re: 2.0.30-dev load spiking [was: upgrade to FreeBSD 4.5-PRERELEASE]

2002-01-03 Thread Greg Ames

Aaron Bannert wrote:
 
 On Thu, Jan 03, 2002 at 11:56:26AM -0500, Greg Ames wrote:
  I do see some weirdness in 2.0.30 with www.apache.org/dyn/closer.cgi -
  it looks like we're doing one byte reads from the pipe to the cgi.  I
  don't know yet if 2_0_28 does the same.  

2_0_28 does the same, so that's not it.  (sigh)  But it sure looks
broken.  I put new trusses of just running that cgi on both levels in
http://www.apache.org/~gregames

 I don't think that would spike the run queue more than 1, no?

I'm thinking that we've got something that burns CPU like crazy for a
while, then stops.  If in the meantime normal I/O is completing, the
kernel would be adding things to the run queue at a normal rate. 
Processes just wouldn't leave the run queue as quickly as they normally
do.

Greg



Re: mod_perl 2.0 on Apache 2

2002-01-03 Thread Stipe Tolj

   Dunno if this is the right place to ask, but anyhow: Where can I find
   information on building / installing mod_perl 2.0 on Windows? Is this
   already possible?

if it's currently impossible for native Win32 and possible for Unix
flavors, I guess we may get it to work under Cygwin for Win32. My 2ct.

Stipe

[EMAIL PROTECTED]
---
Wapme Systems AG

Münsterstr. 248
40470 Düsseldorf

Tel: +49-211-74845-0
Fax: +49-211-74845-299

E-Mail: [EMAIL PROTECTED]
Internet: http://www.wapme-systems.de
---
wapme.net - wherever you are



Re: mod_perl 2.0 on Apache 2

2002-01-03 Thread William A. Rowe, Jr.

Dunno if this is the right place to ask, but anyhow: Where can I find
information on building / installing mod_perl 2.0 on Windows? Is this
already possible?
 
 if it's currently impossible for native Win32 and possible for Unix
 flavors, I guess we may get it work under Cygwin for Win32. My 2ct.

Huh?  Of course it works [great job Doug :-]

You first need to set up your environment for command line builds (VCVARS32
and SETENV if you have brought your PSDK up to date) and this;

perl makefile.pl MP_USE_DSO=1 MP_GENERATE_XS=1 MP_AP_PREFIX=c:\apache2

Then nmake.

Pretty simple.

Bill





Re: mod_perl 2.0 on Apache 2

2002-01-03 Thread Sebastian Bergmann

William A. Rowe, Jr. wrote:
 Huh?  Of course it works [great job Doug :-]

  Good to know.

 perl makefile.pl MP_USE_DSO=1 MP_GENERATE_XS=1 MP_AP_PREFIX=c:\apache2
 Then nmake.

perl makefile.pl MP_USE_DSO=1 MP_GENERATE_XS=1 
MP_AP_PREFIX=c:\server\apache
No such signal: SIGUSR1 at Apache-Test/lib/Apache/TestSmoke.pm line 16.
Compilation failed in require at Apache-Test/lib/Apache/TestSmokePerl.pm
line 7.

BEGIN failed--compilation aborted at
Apache-Test/lib/Apache/TestSmokePerl.pm line 7.
Compilation failed in require at makefile.pl line 13.
BEGIN failed--compilation aborted at makefile.pl line 13.

 You first need to set up your environment for command line builds 
 (VCVARS32 and SETENV if you have brought your PSDK up to date) and 
 this;

  I guess I have to do this now :-)

-- 
  Sebastian Bergmann
  http://sebastian-bergmann.de/ http://phpOpenTracker.de/

  Did I help you? Consider a gift: http://wishlist.sebastian-bergmann.de/



Re: mod_perl 2.0 on Apache 2

2002-01-03 Thread William A. Rowe, Jr.

From: Sebastian Bergmann [EMAIL PROTECTED]
Sent: Thursday, January 03, 2002 4:42 PM


 William A. Rowe, Jr. wrote:
 
  perl makefile.pl MP_USE_DSO=1 MP_GENERATE_XS=1 MP_AP_PREFIX=c:\apache2
  Then nmake.
 
 perl makefile.pl MP_USE_DSO=1 MP_GENERATE_XS=1 
 MP_AP_PREFIX=c:\server\apache
 No such signal: SIGUSR1 at Apache-Test/lib/Apache/TestSmoke.pm line 16.
 Compilation failed in require at Apache-Test/lib/Apache/TestSmokePerl.pm
 line 7.

I've seen that bug - hadn't created a patch yet, but essentially commented
out line 16 with an if ($^O ne 'MSWin32').

 BEGIN failed--compilation aborted at
 Apache-Test/lib/Apache/TestSmokePerl.pm line 7.
 Compilation failed in require at makefile.pl line 13.
 BEGIN failed--compilation aborted at makefile.pl line 13.
 
  You first need to set up your environment for command line builds 
  (VCVARS32 and SETENV if you have brought your PSDK up to date) and 
  this;
 
   I guess I have to do this now :-)

I don't think so, it's just platform specific code cruft that landed in
TestSmokePerl.pm.  Once I commented it out, everything seems to build/run
just fine.  Although SSL is a bear in and of itself.

Please take this to the modperl or [EMAIL PROTECTED] lists, it really 
doesn't belong on this list.

Bill





Re: mod_perl 2.0 on Apache 2

2002-01-03 Thread Stas Bekman

Sebastian Bergmann wrote:

   Dunno if this is the right place to ask, but anyhow: Where can I find
   information on building / installing mod_perl 2.0 on Windows? Is this
   already possible?

Randy Kobes has released a win32 binary, see:
http://mathforum.org/epigone/modperl/brilharnal

Please subscribe to [EMAIL PROTECTED] to see what's 
cooking on the mod_perl 2.0 front. You will find the necessary documentation
in modperl-2.0 cvs repository.

_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/