Re: Apache and CLOSE_WAIT state

2008-09-22 Thread Richard Hubbell
--- On Wed, 9/3/08, Arnab Ganguly <[EMAIL PROTECTED]> wrote:

> From: Arnab Ganguly <[EMAIL PROTECTED]>
> Subject: Apache and CLOSE_WAIT state
> To: dev@httpd.apache.org
> Date: Wednesday, September 3, 2008, 12:44 AM
> Hi All,
> My Apache module hangs; when I do an lsof -i:<listening port>, the
> output gives lots of CLOSE_WAIT.
> Initially the state comes out as ESTABLISHED, but as the CLOSE_WAIT
> count grows, my server hangs.
> What would be the procedure to prevent this?
> Apache Webserver version is 2.2.8 with MPM=worker and
> OS=Red-Hat Release 3.0
> Any help would be very much appreciated.
> Thanks and regards
> Arnab

There do seem to be CLOSE_WAIT issues. I've recently done
export CFLAGS=-DNO_LINGCLOSE before building, so httpd won't use lingering_close.
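
For what it's worth, the flag only needs to reach the compiler when httpd
is built; roughly like this (the exact configure options will vary with
your setup):

    export CFLAGS=-DNO_LINGCLOSE
    ./configure --with-mpm=worker
    make && make install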

There's also bug 36636 that may or may not be relevant to the issue.



  


Re: make test failures on solaris5.10

2008-09-22 Thread Richard Hubbell
--- On Mon, 9/22/08, Paul Querna <[EMAIL PROTECTED]> wrote:

> From: Paul Querna <[EMAIL PROTECTED]>
> Subject: Re: make test failures on solaris5.10
> To: dev@httpd.apache.org
> Date: Monday, September 22, 2008, 3:46 PM
> Richard Hubbell wrote:
> > I ran "make test"
> > 
> > Safe to ignore? Bugs? Something mis-configured? 
> > 
> > testflock: \ld.so.1: tryread: fatal:
> libgcc_s.so.1: open failed: No such file or directory
> -ld.so.1: tryread: fatal: libgcc_s.so.1: open failed: No
> such file or directory
> > FAILED 2 of 3
> > 
> > testoc: -ld.so.1: occhild: fatal: libgcc_s.so.1: open
> failed: No such file or directory SUCCESS
> > 
> > testpipe :|/bin/bash: line 1: 19228 Broken Pipe
> ./$prog
> 
> I believe these are the tests for APR?

Yes.

> 
> [EMAIL PROTECTED] is likely a better place.

Didn't know there was a separate list, but everything has its own list.
APR comes in the httpd tarball.

> 
> But it appears you may need to hack your LD path, since you
> compiled with GCC on Solaris 10, which can cause pain, as you
> seem to have found :-)

Yes, turns out I needed to run crle -u -l /usr/local/lib
(Thanks to Peter for that!)
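
If I understand it right, that command appends /usr/local/lib (where GCC
installs libgcc_s.so.1) to the runtime linker's default search path, so the
test binaries can find it:

    crle -u -l /usr/local/lib    # update the default library search path
    crle                         # no arguments: show the current configuration

Exporting LD_LIBRARY_PATH=/usr/local/lib before running "make test" would be
a session-only alternative.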

Thanks.

> 
> If you hack the ld path to find libgcc_s, and still get
> errors, it would 
> be good to bring them up on the [EMAIL PROTECTED] list.
> 
> Thanks,
> Paul


  


Re: make test failures on solaris5.10

2008-09-22 Thread Paul Querna

Richard Hubbell wrote:

I ran "make test"

Safe to ignore? Bugs? Something mis-configured? 


testflock: \ld.so.1: tryread: fatal: libgcc_s.so.1: open failed: No such file 
or directory -ld.so.1: tryread: fatal: libgcc_s.so.1: open failed: No such file 
or directory
FAILED 2 of 3

testoc: -ld.so.1: occhild: fatal: libgcc_s.so.1: open failed: No such file or 
directory SUCCESS

testpipe :|/bin/bash: line 1: 19228 Broken Pipe ./$prog


I believe these are the tests for APR?

[EMAIL PROTECTED] is likely a better place.

But it appears you may need to hack your LD path, since you
compiled with GCC on Solaris 10, which can cause pain, as you
seem to have found :-)


If you hack the ld path to find libgcc_s, and still get errors, it would 
be good to bring them up on the [EMAIL PROTECTED] list.


Thanks,
Paul


make test failures on solaris5.10

2008-09-22 Thread Richard Hubbell
I ran "make test"

Safe to ignore? Bugs? Something mis-configured? 

testflock: \ld.so.1: tryread: fatal: libgcc_s.so.1: open failed: No such file 
or directory -ld.so.1: tryread: fatal: libgcc_s.so.1: open failed: No such file 
or directory
FAILED 2 of 3

testoc: -ld.so.1: occhild: fatal: libgcc_s.so.1: open failed: No such file or 
directory SUCCESS

testpipe :|/bin/bash: line 1: 19228 Broken Pipe ./$prog




  


Re: mod_proxy race condition bug #37770

2008-09-22 Thread Adam Woodworth
Sounds good, glad I was able to help.  I'll keep an eye on this in a
future httpd release.


On Sat, Sep 20, 2008 at 6:42 AM, Ruediger Pluem <[EMAIL PROTECTED]> wrote:
>
>
> On 09/20/2008 12:21 PM, Nick Kew wrote:
>>
>> On 19 Sep 2008, at 23:08, Ruediger Pluem wrote:
>>
>>> On 09/19/2008 11:24 PM, Adam Woodworth wrote:

 The problem with this is that it will also do this for connections to
 the backend that timed out.  For our application, we want actual
 timeouts to be caught as normal and kicked back through to the client
 as a 50x error.
>>>
>>> Hm. Good point. I have to think about it, because a timeout error can
>>> justify a repeated request (like when a firewall between frontend and
>>> backend has cut the connection, the packets are now simply dropped by
>>> the firewall, and a fresh connection would fix this).
>>
>> I think Adam is right.  The justification for the RFC-breaking connection
>> drop is IIRC that it's merely propagating the same from the backend
>> to the client.  That doesn't apply in the case of a timeout.
>>
>
> Yes, I came to the same conclusion in the meantime. The firewall drop can be
> prevented by setting keepalive to on. Furthermore, the typical keepalive
> timeouts for HTTP connections are way below typical firewall timeouts. So if
> we get back APR_TIMEUP, we can safely assume that the backend did not respond
> in a timely manner. Looking at his patch, I think it looks basically fine.
> Hope to find some time for a closer look and a commit later on.
>
> Regards
>
> Rüdiger
>
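
To make the distinction concrete, the check being discussed amounts to
roughly this (a sketch only, not the actual patch from bug #37770; rv and r
are placeholders for the status returned when reading the backend response
and for the client request):

    if (rv != APR_SUCCESS) {
        if (APR_STATUS_IS_TIMEUP(rv)) {
            /* The backend genuinely did not respond in time: report it
             * to the client as a normal gateway timeout. */
            return HTTP_GATEWAY_TIME_OUT;
        }
        /* The backend connection was dropped (e.g. reset by a firewall):
         * propagate the drop to the client instead of turning it into a
         * 50x response. */
        r->connection->aborted = 1;
        return OK;
    }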


Re: [flood] Critical section for flood_report_relative_times.c

2008-09-22 Thread Justin Erenkrantz
On Sat, Sep 20, 2008 at 11:40 PM, Ohad Lutzky <[EMAIL PROTECTED]> wrote:
>> *SNIP* patch *SNIP*
>
> The patch seems to work well, thanks! :)

Cool - committed in r697920.  Thanks.  -- justin


AuthzMergeRules blocks everything in default configuration

2008-09-22 Thread Dan Poirier

I hate to re-open this can of worms, but...

Unless I'm missing something, in trunk right now, uncommenting the includes
for examples like "extra/httpd-manual.conf" does not result in being able
to serve the documentation pages.


In the main config file:

<Directory />
    Require all denied
</Directory>

blocks all access, and that's inherited by every other <Directory> or
<Location> in the configuration, since AuthzMergeRules defaults to On.

To make this work, one would have to add AuthzMergeRules Off to every
other <Directory> or <Location> in the configuration that isn't a subset
of another one that already has it.
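
For example (the manual path is just an illustration), a layout like this
still ends up denied unless the inner section opts out of the merge:

    <Directory />
        Require all denied
    </Directory>

    <Directory "/usr/local/apache2/manual">
        # "AuthzMergeRules Off" would be needed here; otherwise the
        # inherited "Require all denied" still applies.
        Require all granted
    </Directory>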

Doing that makes me wonder what's the point of having it, if we have to
turn it off in almost every case to actually serve pages.

Or would it make sense to add AuthzMergeRules Off to <Directory />?
Would that make the rest of the permissions kind of stand alone?  I guess
then you'd have to add AuthzMergeRules On to any of them whose permissions
you wanted inherited by even lower-level sections.

I read through some previous discussion of the authz inheritance behavior,
but it doesn't seem to have considered the effect of having "Require all
denied" at the top level, which overrides everything else by default even
when other sections specify other permissions.

Dan





Re: Future direction of MPMs, was Re: svn commit: r697357 - in /httpd/httpd/trunk: include/ modules/http/ modules/test/ server/ server/mpm/experimental/event/

2008-09-22 Thread Jim Jagielski


On Sep 22, 2008, at 11:51 AM, Paul Querna wrote:


No, in pure requests/second, there will not be a significant  
difference.


Today, a properly tuned apache httpd, with enough RAM, can keep up
with the 'fastest' web servers of the day, like lighttpd.  Most of
the benchmarks where we do badly are when apache httpd is
misconfigured, or running on extremely low RAM resources.


I think what we solve is that with a slightly more async core and
MPM structure, we can significantly reduce the memory required to
service several thousand long-lived connections.




Agreed. We're not talking, imo, about increasing performance. We're
talking about increasing efficiency.

Moving forward with a hybrid that lets you pull in async abilities  
when needed seems reasonable to me.




++1...



Re: Future direction of MPMs, was Re: svn commit: r697357 - in /httpd/httpd/trunk: include/ modules/http/ modules/test/ server/ server/mpm/experimental/event/

2008-09-22 Thread Paul Querna

Akins, Brian wrote:

On 9/21/08 2:17 AM, "Bing Swen" <[EMAIL PROTECTED]> wrote:


But an optimal
network i/o model needs a layer that maps a *request* to a thread, so that a
worker thread (or process) will not have to be tied up entirely with a
single connection during the whole lifetime of the connection. Instead, a
worker can be scheduled to handle different connections, which helps both
reduce the number of workers and improve the performance of request handling
(especially on slow connections).


I still want to see this backed up with real world experience.  I know I
keep repeating myself, but in the real world, we have never seen the
supposed inherent performance problems in the worker model (1 connection = 1
thread).


At $work, we just upgraded the RAM in what are essentially web server
machines, just because we are running the worker MPM and expect lots of
long-lived connections.  It has a cost, and it isn't free.



It sounds great to theorize about the wonders of a completely event driven
or asynchronous model. However, it seems that this only nets real measurable
performance gains in very simplistic benchmarks.



What I view happening in the event MPM today, and where I would like to 
go in 2.4, isn't a fully 'asynchronous model'.


It is much more of a hybrid, using threads (and processes) when running
most code, but allowing requests to be moved to an event queue while
waiting for IO or a timer.
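
Very roughly, the idea looks something like this (a sketch only, not the
actual event MPM code; the function name and the pollset/conn_baton
variables are placeholders):

    #include <apr_poll.h>

    static apr_pollset_t *pollset;  /* assumed to be created once at startup */

    /* Instead of blocking a worker thread until the socket is readable,
     * park the connection in a central pollset and give the thread back
     * to the pool. */
    static apr_status_t park_connection(apr_socket_t *sock, void *conn_baton,
                                        apr_pool_t *p)
    {
        apr_pollfd_t pfd;

        pfd.p = p;
        pfd.desc_type = APR_POLL_SOCKET;
        pfd.desc.s = sock;
        pfd.reqevents = APR_POLLIN;    /* wake up when data arrives */
        pfd.client_data = conn_baton;  /* remember which connection this is */

        /* A listener thread polling the pollset re-queues the connection
         * to a worker when the event fires. */
        return apr_pollset_add(pollset, &pfd);
    }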



I'm all for making httpd faster, scale better, etc.  I just don't want to be
extremely disappointed if we rewrite it all and gain nothing but a more
complicated model.  If we get great gains, wonderful, but I'd like to see
some actual numbers before we all decide to rework the core.


No, in pure requests/second, there will not be a significant difference.

Today, a properly tuned apache httpd, with enough RAM, can keep up with 
the 'fastest' web servers of the day, like lighttpd.  Most of the 
benchmarks where we do badly are when apache httpd is mis-configured, or 
running on extremely low RAM resources.


I think what we solve is that with a slightly more async core and MPM 
structure, we can significantly reduce the memory required to service 
several thousand long-lived connections.


Moving forward with a hybrid that lets you pull in async abilities when 
needed seems reasonable to me.


-Paul





Re: Future direction of MPMs, was Re: svn commit: r697357 - in /httpd/httpd/trunk: include/ modules/http/ modules/test/ server/ server/mpm/experimental/event/

2008-09-22 Thread Graham Leggett

Akins, Brian wrote:


I'm all for making httpd faster, scale better, etc.  I just don't want to be
extremely disappointed if we rewrite it all and gain nothing but a more
complicated model.  If we get great gains, wonderful, but I'd like to see
some actual numbers before we all decide to rework the core.


I think being extremely disappointed is a real risk, but I don't think 
it is a reason not to give it a try.


Perhaps a suitable compromise is to say this:

- Some people want to try to come up with a purely event driven model 
that requires changing the MPM interface as necessary. Who knows, it 
might give performance gains too!


- Some people want to keep an MPM that implements the worker model, 
because we know it works to an acceptable level.


If we can achieve both at once, that will be ideal.


(Disclaimer: yes, I'm partially playing devil's advocate here.)


Wearing the hat of someone who likes to try out new stuff, from time to 
time you hit a dead end within the design of the server that makes it 
either hard or impossible to achieve something new.


A shake-up of the core is likely to remove some of these barriers, which 
in turn means that avenues that until now have been dead ends open up, 
which makes the development interesting again.


I think the second-from-worst case scenario is that Paul and others end 
up exploring some cool ideas and they don't work, and then the fun was 
in the exploring of the new ideas, so nothing is really lost. The best 
case scenario is obviously that some cool ideas are explored and they do 
work. The worst case scenario is that people do nothing.


Regards,
Graham
--





Re: Future direction of MPMs, was Re: svn commit: r697357 - in /httpd/httpd/trunk: include/ modules/http/ modules/test/ server/ server/mpm/experimental/event/

2008-09-22 Thread Issac Goldstand



Akins, Brian wrote:
> On 9/21/08 2:17 AM, "Bing Swen" <[EMAIL PROTECTED]> wrote:
> 
>> But an optimal
>> network i/o model needs a layer that maps a *request* to a thread, so that a
>> worker thread (or process) will not have to be tied up entirely with a
>> single connection during the whole lifetime of the connection. Instead, a
>> worker can be scheduled to handle different connections, which helps both
>> reduce the number of workers and improve the performance of request handling
>> (especially on slow connections).
> 
> I still want to see this backed up with real world experience.  I know I
> keep repeating myself, but in the real world, we have never seen the
> supposed inherent performance problems in the worker model (1 connection = 1
> thread).
> 
> It sounds great to theorize about the wonders of a completely event driven
> or asynchronous model. However, it seems that this only nets real measurable
> performance gains in very simplistic benchmarks.
> 
> I'm all for making httpd faster, scale better, etc.  I just don't want to be
> extremely disappointed if we rewrite it all and gain nothing but a more
> complicated model.  If we get great gains, wonderful, but I'd like to see
> some actual numbers before we all decide to rework the core.
> 

Devil's advocate or not, the point is valid IMO.  We can (and likely
will) have loads of fun reworking everything, but I'm +1 with Brian here.

  Issac


Re: Future direction of MPMs, was Re: svn commit: r697357 - in /httpd/httpd/trunk: include/ modules/http/ modules/test/ server/ server/mpm/experimental/event/

2008-09-22 Thread Akins, Brian
On 9/21/08 2:17 AM, "Bing Swen" <[EMAIL PROTECTED]> wrote:

> But an optimal
> network i/o model needs a layer that maps a *request* to a thread, so that a
> worker thread (or process) will not have to be tied up entirely with a
> single connection during the whole lifetime of the connection. Instead, a
> worker can be scheduled to handle different connections, which helps both
> reduce the number of workers and improve the performance of request handling
> (especially on slow connections).

I still want to see this backed up with real world experience.  I know I
keep repeating myself, but in the real world, we have never seen the
supposed inherent performance problems in the worker model (1 connection = 1
thread).

It sounds great to theorize about the wonders of a completely event driven
or asynchronous model. However, it seems that this only nets real measurable
performance gains in very simplistic benchmarks.

I'm all for making httpd faster, scale better, etc.  I just don't want to be
extremely disappointed if we rewrite it all and gain nothing but a more
complicated model.  If we get great gains, wonderful, but I'd like to see
some actual numbers before we all decide to rework the core.


(Disclaimer: yes, I'm partially playing devil's advocate here.)


-- 
Brian Akins
Chief Operations Engineer
Turner Digital Media Technologies