httpd-test/perl-framework STATUS: -*-text-*-
Last modified at [$Date: 2002/03/09 05:22:48 $]
Stuff to do:
* finish the t/TEST exit code issue (ORed with 0x2C if
framework failed)
* change existing tests that frob the DocumentRoot (e.g.,
(sorry if this pops up twice - I've been having subscription problems due to
a change in my outgoing email address)
hi all...
just recently the Makefile.PL I've been using as a template for
Apache::Test started failing under bleedperl.
here's the error:
Can't use string (Apache::TestMM) as a HASH ref while strict refs in use
Geoffrey Young wrote:
(sorry if this pops up twice - I've been having subscription problems due to
a change in my outgoing email address)
I did adjust the import method in TestMM recently (because it didn't
work across many Makefiles), but why do you call -clean? You want to
import this method
-Original Message-
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 11, 2002 2:02 PM
To: [EMAIL PROTECTED]
Subject: Re: MM_Unix changes in bleedperl
Geoffrey Young wrote:
(sorry if this pops up twice - I've been having subscription problems due to a
hi all...
just recently the Makefile.PL I've been using as a template for
Apache::Test started failing under bleedperl.
here's the error:
Can't use string (Apache::TestMM) as a HASH ref while strict refs in use
at /src/bleedperl/lib/5.7.3/ExtUtils/MM_Unix.pm line 352.
my Makefile.PL is
Jeff,
remember this one ?
Found my answer to what is going on, have a workaround but not a fix.
Thought you might be interested in what I found.
When I build, I get (and want) IPv6 support, but when I run the image on a
IPv4 only machine, everything seems to work but the POD. Turns out that
David Hill [EMAIL PROTECTED] writes:
Jeff,
remember this one ?
of course :) (I also remember your other problem where I suggested
running buildconf to pick up local libtool... what happened with
that?)
When I build, I get (and want) IPv6 support, but when I run the image on a
IPv4 only
On Thu, 11 Apr 2002, Dwayne Miller wrote:
Did order, allow, and deny configuration options go away on the
Directory directive? This is in the 2.0 docs, but I can't get it to
work. I do have a lot of the modules unloaded however.
<Directory />
    Order Deny,Allow
    Deny from All
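For reference, Order/Allow/Deny did not go away in 2.0, but they are implemented by mod_access, so the directives stop working if that module is unloaded. A minimal sketch of the block being attempted, assuming mod_access is compiled in or loaded:

```apache
# Requires mod_access; without it these directives are
# unrecognized and the server refuses to start.
<Directory />
    Order Deny,Allow
    Deny from All
</Directory>
```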
Hi,
From experience on the samba/samba-tng lists I have
to discourage answering user questions here. Only
point them at the user list(s). The reason for this
is simple: they keep coming back if you give an
answer. I know this sounds (a bit?) rude, but,
well, fill in the ... ;)
Thanks,
Jeff Trawick [EMAIL PROTECTED] wrote:
Roy T. Fielding [EMAIL PROTECTED] writes:
See also http://fink.sourceforge.net/doc/porting/libtool.php
Thanks for the link to fink. It looks useful in general, beyond the
libtool information.
I didn't have any trouble* getting either Sander's or
Jeff,
of course :) (I also remember your other problem where I suggested
running buildconf to pick up local libtool... what happened with
that?)
Just got around to trying that (long queue, what with a brand new Apache 2
to play with :-). Yes, the latest libtool installed 1.4.2 and the
Rose, Billy [EMAIL PROTECTED] writes:
Would the solution in my last email do what you are looking for?
Perhaps... But I'm the moron that is hesitant regarding drastic
changes here so I'm probably not the right person to sell your design
to right now :)
--
Jeff Trawick | [EMAIL PROTECTED]
But this bites
1) when there is just one child process (wasted syscall)
2) because it would normally go faster if the listener could stay just
   ahead of the workers so that workers can dequeue new work when they
   finish with the old without having to wait on the listener
Currently, if you set the LimitRequestBody and then do a POST request
against a CGI resource with a body larger than the LimitRequestBody,
one of two broken things happens:
1) You get a closed socket and no returned data.
2) You get the normal data from the CGI script, but no 413 error.
It
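The directive under discussion is set per-context; a minimal sketch of the configuration that triggers the behavior above (the 8192 limit here is an arbitrary illustrative value):

```apache
# Reject request bodies larger than 8 KB; a POST beyond this
# should produce a 413 before the CGI script's output is sent.
<Directory "/cgi-bin">
    LimitRequestBody 8192
</Directory>
```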
David Hill [EMAIL PROTECTED] writes:
Jeff,
of course :) (I also remember your other problem where I suggested
running buildconf to pick up local libtool... what happened with
that?)
Just got around to trying that (long queue, what with a brand new Apache 2
to play with :-). Yes, the
On Thursday, April 11, 2002, at 02:41 PM, Pier Fumagalli wrote:
Jeff Trawick [EMAIL PROTECTED] wrote:
The only thing I complain about now is versioning (done by libtool, it's all so screwed)...
*other than wasting a fair amount of time because APR configure bombs
when I use
I hope my emails are not annoying you guys. To give a more complete picture
of this (pulled from methods I used in a client server app):
The initial process creates a shared memory area for the queue and then a
new thread, or child process, whose sole purpose is to dispatch connections
to
I just added a Tru64 note at
http://www.apache.org/dist/httpd/
(click on the "Check here to see" link right under the links to the tarballs)
Cool, thanks.
A lookup of :: is supposed to get you in6addr_any :)... I'll have to
look at this further later... Thanks for showing the gist
David Hill [EMAIL PROTECTED] writes:
A lookup of :: is supposed to get you in6addr_any :)... I'll have to
look at this further later... Thanks for showing the gist results...
Sounds like what I was telling them :-) And I added that in6addr_any should
work regardless of v4 or v6
On Thu, Apr 11, 2002 at 03:04:27PM -0400, Bill Stoddard wrote:
I am not an expert on the worker MPM but I don't think that is an accurate
statement of the problem. The accept thread uses ap_queue_push() to enqueue
a socket for the worker threads. ap_queue_push() will block if the queue
Now I know I'm missing something here, so maybe you can fill in the
blanks for me. This doesn't seem like a problem that would hang the
entire server or put a hard limit on the number of concurrent connections
(across processes). I would expect a finishing worker thread to return
to the queue
Chuck Murcko [EMAIL PROTECTED] wrote:
On Thursday, April 11, 2002, at 02:41 PM, Pier Fumagalli wrote:
Jeff Trawick [EMAIL PROTECTED] wrote:
The only thing I complain about now is versioning (done by libtool,
it's all
so screwed)...
*other than wasting a fair amount of time because
Jeff Trawick [EMAIL PROTECTED] wrote:
Pier Fumagalli [EMAIL PROTECTED] writes:
Jeff Trawick [EMAIL PROTECTED] wrote:
Pier's solution results in a cleaner build (a bunch of bogus
basename invocations went away). Neither version of libtool gets the
library path into httpd for some reason
Aaron Bannert wrote:
On Thu, Apr 11, 2002 at 02:09:27PM -0700, Brian Pane wrote:
The problem isn't that the busy worker threads will never become unbusy
and pick up new work from the queue. If the queue is full, and the listener
is blocked, the listener will (with the current code) be properly
Brian Pane wrote:
While flood definitely has more concurrency, I don't think the performance
problem in this test case was ab's fault. From what I was able to observe
It's also worth noting that the performance was fine when
I used ab with prefork and leader/follower; only with worker
Ok, now we're on the same page. I see this as a problem as well, but I
don't think this is what is causing the problem described earlier in this
thread. Considering how unlikely it is that all of the threads on one
process are on long-lived connections, I don't see this as a critical
I spoke too soon on the libtool thingie; the box I tried it on had some
other stuff on it. Also, on this particular box, for some reason it did not
find threads (specifically pthread.h according to apr/config.log).
On a cleaner box, I found I had to install GNU m4, then GNU autoconf as well
On Thu, Apr 11, 2002 at 03:27:23PM -0700, Roy T. Fielding wrote:
Ok, now we're on the same page. I see this as a problem as well, but I
don't think this is what is causing the problem described earlier in this
thread. Considering how unlikely it is that all of the threads on one
process are
On Thu, Apr 11, 2002 at 04:27:08PM -0700, Justin Erenkrantz wrote:
No. The limit needs to apply to *all* bucket reads, not just
ap_get_client_block, which we shouldn't even be supporting
(it's old cruft from 1.3). This patch is broken as inputs
will not be limited if you don't use
On Thu, 11 Apr 2002, Ryan Bloom wrote:
And you can always play some games with the counters to enable you
to accept a few additional connections (however you define 'few')
in order to keep some work in the queue.
It just is hard to think about what few should be given that there
Cliff Woolley wrote:
On Thu, 11 Apr 2002, Aaron Bannert wrote:
Under typical conditions, long-running and short-running requests will
be distributed throughout the children. In order for this scenario to
occur, all M threads in a child would have to be in use by a long-lived
connection.
On Thu, Apr 11, 2002 at 04:57:23PM -0700, Brian Pane wrote:
On the contrary, production servers sometimes have *huge* discontinuities
in the number of concurrent connections. Think about what happens to the
connection rate at an online brokerage every day at the instant when the
stock
On Thu, Apr 11, 2002 at 04:34:08PM -0700, Aaron Bannert wrote:
I guess I'm unclear why it is CGI's responsibility to watch for this.
Do we then need to put these kinds of checks in every http-body-using
element? (My relative newness to the filters is showing.)
What happens in CGI if it gets
Justin Erenkrantz wrote:
Can't we just add in the extra mutex check to worker and move on?
(i.e. don't call accept() when the worker queue is empty).
+1
Adding the mutex check will fix one of the three problems in worker that
I know of. The other two--the large-grained overhead of forking a
didn't get a response on the docs list. maybe you guys know.
thanx.
barbee.
hi,
in httpd-docs-2.0/manual/mod there are xml files for each apache module's
directives. i couldn't find the equivalent in httpd-docs-1.3, just the html
files. are there xml files for the apache
Aaron Bannert wrote:
Back then we had the POD and at least one listener, which essentially
caused us to never use S_L_U_A, meaning we always had an accept mutex.
Now that that's been corrected, the problem with the "we can accept
more connections than we can immediately process" case is showing up
C-L invalid when using SSI DirectoryIndex:
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=7966
We've got three people on Bugzilla who are seeing this bug. Is
anyone available to look at this? This seems related to the
issues that were being addressed right as we shipped. I wonder
if this
On Thu, Apr 11, 2002 at 07:02:48PM -0700, Ryan Bloom wrote:
Dollars to Donuts, the problem is that the C-L filter isn't removing the
C-L header from the request. I won't have time to look at this for a
few weeks though.
The request? Don't you mean the file?
Taking a quick look at
From: Justin Erenkrantz [mailto:[EMAIL PROTECTED]]
On Thu, Apr 11, 2002 at 07:02:48PM -0700, Ryan Bloom wrote:
Dollars to Donuts, the problem is that the C-L filter isn't removing the
C-L header from the request. I won't have time to look at this for a
few weeks though.
The request?
I've created a new struct fd_queue_info_t that I believe can be used to
monitor the number of idle workers and make sure that we don't accept
more connections than we have available workers. I think by implementing
this we can also clear out some of the extra cruft in the queue, only
relying on a
I'm having trouble with the SSLProxy stuff as of a day or two ago. With
ssl/proxy.t in httpd-test running under Linux 2.4 with prefork, all of
the tests are getting 403 response codes, eg:
#lwp request:
#POST http://localhost:8536/eat_post HTTP/1.0
#User-Agent: libwww-perl/5.60