[Adding dev@apr]
On May 22, 2014, at 6:04 AM, Yann Ylavic ylavic@gmail.com wrote:
Hello,
while working on
https://issues.apache.org/bugzilla/show_bug.cgi?id=56226 for a
possible way to use vhost's KeepAliveTimeout with mpm_event (by using
a skiplist instead of the actual keepalive queue), I realized that
apr_skiplists do not accept multiple values per key (unlike apr_table
for
- I think it is good to fix this behavior; using only the global
keepalive timeout was definitely a choice I (?) made when doing it.
- Skiplists seem generally acceptable for storing this data.
Alternatively a BTree with in-order iteration would be competitive...
but, meh, either will be fine.
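As a concrete illustration of the duplicate-key limitation: apr_skiplist_insert()
treats compare == 0 as "already present", so a keepalive queue keyed on expiry
time needs a comparator that never returns 0 for two distinct connections. A
minimal sketch of that workaround (the timer_event_t type and its fields are
illustrative, not the actual mpm_event structures):

    #include "apr_skiplist.h"
    #include "apr_time.h"

    typedef struct {
        apr_time_t when;    /* absolute expiry of this keepalive timeout */
        /* ... per-connection payload ... */
    } timer_event_t;

    /* Order by expiry; break ties on address so two connections that
     * expire at the same instant are still distinct skiplist entries. */
    static int timer_comp(void *a, void *b)
    {
        apr_time_t t1 = ((timer_event_t *)a)->when;
        apr_time_t t2 = ((timer_event_t *)b)->when;
        if (t1 != t2)
            return (t1 < t2) ? -1 : 1;
        return (a < b) ? -1 : ((a > b) ? 1 : 0);
    }

    /* Setup: apr_skiplist_init(&sl, pool);
     *        apr_skiplist_set_compare(sl, timer_comp, timer_comp);
     * Insert with apr_skiplist_insert(sl, te); the earliest-expiring
     * connection is then apr_skiplist_peek(sl). */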
Has anyone ever seen a situation where httpd (or the OS) will RST a
connection because there's too much unread data or such?
I'm doing some pipelined requests with serf against a 2.0.50 httpd on RH7.1
server (2.4.2 kernel?). I'm getting ECONNRESET on the client after I try
to read or write a
On Thu, Feb 09, 2006 at 10:01:04PM -0800, Roy Fielding wrote:
Keep in mind that a RST also tells the recipient to throw away any
data that it has received since the last ACK. Thus, you would never
see the server's last response unless you use an external network
monitor (like another PC
On Feb 9, 2006, at 10:17 PM, Justin Erenkrantz wrote:
On IRC, Paul pointed out this bug (now fixed):
http://issues.apache.org/bugzilla/show_bug.cgi?id=35292
2.0.50 probably has this bug - in that it won't do lingering close
correctly - and perhaps that's what I'm running into.
You're
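Bug 35292 was about httpd skipping lingering close; given Roy's point about
RST destroying unacknowledged data, here is a minimal sketch of the technique
with plain POSIX sockets (an illustration, not httpd's actual
ap_lingering_close): half-close our side, then drain the client's remaining
data so the final response isn't wiped out by a RST.

    #include <sys/socket.h>
    #include <sys/select.h>
    #include <unistd.h>

    static void lingering_close(int fd)
    {
        char junk[512];
        fd_set rfds;
        struct timeval tv;

        shutdown(fd, SHUT_WR);          /* send FIN, but keep reading */
        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            tv.tv_sec = 2;              /* short grace period per read */
            tv.tv_usec = 0;
            if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
                break;                  /* timeout or error: give up */
            if (read(fd, junk, sizeof(junk)) <= 0)
                break;                  /* EOF: client saw our FIN */
        }
        close(fd);
    }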
Any cute ideas on how to work around this? The real problem is that
there's no way for the server to tell me what its configured
MaxKeepAliveRequests setting is. If I knew that, I could respect it -
instead I have to discover it experimentally...
That's why we used to send a Keep-Alive:
a connection: close coming from the client
* should not affect the connection to the server, it's unlikely
* that subsequent client requests will hit this thread/process,
* so we cancel server keepalive if the client does.
*/
This code was from the days when keepalives were
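Roy's reference is to the Keep-Alive response header that 1.3-era servers sent
(an example appears later in this thread: "Keep-Alive: timeout=15, max=100").
A hypothetical client-side helper, not part of serf, could use its "max"
parameter instead of discovering the limit experimentally:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: pull the "max" parameter out of a
     * "Keep-Alive: timeout=15, max=100" header value so a client can
     * stop pipelining before the server's MaxKeepAliveRequests cuts
     * the connection. Returns -1 when the parameter is absent. */
    static int keepalive_max(const char *header_value)
    {
        const char *p = strstr(header_value, "max=");
        return p ? atoi(p + 4) : -1;
    }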
Does this code from 2.1 in ap_proxy_http_request still make sense? Do we
not want to attempt to maintain the server connection anyway? Maybe I'm
missing some other logic...
/* strip connection listed hop-by-hop headers from the request */
/* even though in theory a connection: close
Graham Leggett wrote:
Akins, Brian wrote:
Proxy in 2.1 now has a connection pool, and my understanding is that
this restriction has fallen away - subrequests should take advantage of
the connection pool just like normal requests can.
So, can this check be removed? I'll submit a patch when
As best I can tell, subrequests do not get the benefits of keepalives
in mod_proxy in 2.1. What is the reason for this?
--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies
Akins, Brian wrote:
As best I can tell, subrequests do not get the benefits of keepalives
in mod_proxy in 2.1. What is the reason for this?
The original reason was that there was a one to one relationship between
a keepalive connection on the browser and a keepalive to the backend
Greg Ames wrote:
Brian Akins wrote:
We've been doing some testing with the current 2.1 implementation, and
it works, it just currently doesn't offer much advantage over worker
for us. If num keepalives == maxclients, you can't accept any more
connections.
that's a surprise
Subject: Re: Keepalives
On 6/20/05 3:14 PM, Greg Ames [EMAIL PROTECTED] wrote:
...so with this setup, I have roughly 3 connections for every worker thread,
including the idle threads.
Cool. Maybe I just need the latest version. Or I could have just screwed my
test...
Anyway
Subject: Keepalives
Here's the problem:
If you want to use keepalives, all of your workers (threads/procs/whatever)
can become busy just waiting on another request on a keepalive connection.
Raising MaxClients does not help.
The Event MPM does not seem to really help this situation
At 08:11 AM 6/17/2005, Akins, Brian wrote:
If you want to use keepalives, all of your workers (threads/procs/whatever)
can become busy just waiting on another request on a keepalive connection.
Raising MaxClients does not help.
No, it doesn't :) But lowering the keepalive threshold to three
are waiting for x seconds for keepalives (even
if it is 3-5 seconds), the server cannot service any new clients. I'm
willing to take an overall resource hit (and inconvenience some
clients) to maintain the overall availability of the server.
Does that make any sense? It does to me, but I may
Akins, Brian wrote:
Short Term solution:
This is what we did. We use worker MPM. We wrote a simple module that
keeps track of how many keepalive connections are active. When a threshold
is reached, it does not allow any more keepalives. (Basically sets
r->connection->keepalive
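A minimal sketch of the counting module Brian describes, under stated
assumptions: tally workers sitting in keepalive via the scoreboard and force
"Connection: close" past a threshold. KEEPALIVE_CAP, the module name, and the
choice of the fixups hook are illustrative; error handling and configuration
directives are omitted.

    #include "httpd.h"
    #include "http_config.h"
    #include "http_request.h"
    #include "ap_mpm.h"
    #include "scoreboard.h"

    #define KEEPALIVE_CAP 100          /* hypothetical threshold */

    static int cap_keepalives(request_rec *r)
    {
        int i, j, daemons = 0, threads = 0, busy_ka = 0;

        ap_mpm_query(AP_MPMQ_HARD_LIMIT_DAEMONS, &daemons);
        ap_mpm_query(AP_MPMQ_HARD_LIMIT_THREADS, &threads);

        /* count workers currently parked in keepalive */
        for (i = 0; i < daemons; i++) {
            for (j = 0; j < threads; j++) {
                if (ap_scoreboard_image->servers[i][j].status
                        == SERVER_BUSY_KEEPALIVE) {
                    busy_ka++;
                }
            }
        }
        if (busy_ka >= KEEPALIVE_CAP) {
            /* stop offering keepalive on this connection */
            r->connection->keepalive = AP_CONN_CLOSE;
        }
        return DECLINED;
    }

    static void register_hooks(apr_pool_t *p)
    {
        ap_hook_fixups(cap_keepalives, NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA keepalive_cap_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        register_hooks
    };

Walking the whole scoreboard on every request is exactly the cost Bill Rowe's
indexed-counts suggestion later in this thread would avoid.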
Akins, Brian wrote:
Here's the problem:
If you want to use keepalives, all of your workers (threads/procs/whatever)
can become busy just waiting on another request on a keepalive connection.
Raising MaxClients does not help.
The Event MPM does not seem to really help this situation. It seems
, HTML and other content come
from separate server pools. But most pages are made up of a few HTML
pages. (You have to look at the HTML source to see what I mean).
Also, we have some app servers that often have all connections tied up
in keepalive because the front ends open tons of keepalives (I
has a long ways to go. Earliest I could see this happening is
in the v 2.4 timeframe.
We've been doing some testing with the current 2.1 implementation, and
it works, it just currently doesn't offer much advantage over worker for
us. If num keepalives == maxclients, you can't accept any more
connections. I want
William A. Rowe, Jr. wrote:
Yes it makes sense. But I'd encourage you to consider dropping that
keepalive time and see if the problem isn't significantly mitigated.
It is mitigated somewhat, but we still hit maxclients without our hack
in place.
Right now, it does take cycles to walk the
Snipping all the other issues, which are largely valid and do
contain some good ideas.
At 10:12 AM 6/17/2005, Brian Akins wrote:
Adding an indexed list of 'counts' would be
very lightweight, and one atomic increment and decrement per state
change. This would probably be more efficient than walking the
entire list.
Sounds good. Of course, when changing from one state to another you
Any interest/objections to adding another MPM query
AP_MPMQ_IDLE_WORKERS
(or some other name)?
In worker.c, we could just add this to ap_mpm_query():

    case AP_MPMQ_IDLE_WORKERS:
        *result = ap_idle_thread_count;
        return APR_SUCCESS;

and in perform_idle_server_maintenance we
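If the query were added, a consumer (say, a keepalive-limiting module like the
one sketched earlier) could read it like any other MPM query. Note that
AP_MPMQ_IDLE_WORKERS here is the proposed name, not an existing code:

    #include "ap_mpm.h"

    int idle = 0;
    /* AP_MPMQ_IDLE_WORKERS is the query code proposed above */
    if (ap_mpm_query(AP_MPMQ_IDLE_WORKERS, &idle) == APR_SUCCESS
        && idle < 10) {
        /* e.g. stop granting keepalives while the MPM is nearly saturated */
    }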
currently doesn't offer much advantage over worker for
us. If num keepalives == maxclients, you can't accept any more
connections.
that's a surprise, and it sounds like a bug. I'll investigate. it used to be
that maxclients was really max worker threads and you could have far more
connections
The subject is misleading; we do allow keepalives on any bogus
requests, today. Undoubtedly this is already used in the wild.
Attached is a patch that would drop keepalive for the garbage
requests that never even hit mod_forensics, or any other module,
due to bogus request lines or unparseable
[Posting for comments before I commit, because the
core filter code is somewhat complex...]
With the current core_output_filter() implementation,
if the client requests a file smaller than 8KB, the
filter reads the file bucket, makes a copy of the
contents, and sets the copy aside in hopes of
On Fri, Aug 16, 2002 at 12:34:11AM -0700, Brian Pane wrote:
[Posting for comments before I commit, because the
core filter code is somewhat complex...]
Looks correct based on my examination. +1. -- justin
On Mon, 24 Jun 2002, Cliff Woolley wrote:
You can. The buckets code is smart enough to (a) take no action if the
apr_file_t is already in an ancestor pool of the one you're asking to
setaside into and (b) just use apr_file_dup() to get it into the requested
pool otherwise to handle the pool
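A sketch of the setaside path Cliff describes, assuming an open apr_file_t
*fd of length len, a request_rec *r, and its conn_rec *c: the bucket is
created in the request pool and then set aside into the longer-lived
connection pool.

    #include "apr_buckets.h"

    /* Create a file bucket in the request pool... */
    apr_bucket *e = apr_bucket_file_create(fd, 0, len, r->pool,
                                           c->bucket_alloc);

    /* ...then move it to the connection pool. If fd already lives in an
     * ancestor pool this is a no-op; otherwise the bucket code calls
     * apr_file_dup() to get a descriptor owned by the target pool. */
    apr_status_t rv = apr_bucket_setaside(e, c->pool);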
On Sun, 2002-06-23 at 23:12, Cliff Woolley wrote:
On Mon, 24 Jun 2002, Bill Stoddard wrote:
Yack... just noticed this too. This renders the fd cache (in
mod_mem_cache) virtually useless. Not sure why we cannot setaside a fd.
You can. The buckets code is smart enough to (a) take no
The more I think about it, though, the more I like the idea of just
writing the brigade out to the client immediately when we see EOS in
core_output_filter(), even if c->keepalive is true. If we do this,
the only bad thing that will happen is that if a client opens a
keepalive connection and
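A minimal sketch of that idea as a tiny output filter, using the real brigade
macros (this is the shape of the change, not the actual core_output_filter()
code): when the brigade carries EOS, force the write immediately instead of
letting the tail be set aside for a possible pipelined successor.

    #include "apr_buckets.h"
    #include "util_filter.h"

    static apr_status_t flush_on_eos(ap_filter_t *f, apr_bucket_brigade *bb)
    {
        if (!APR_BRIGADE_EMPTY(bb)
            && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) {
            /* End of the response: insert a FLUSH so the core filter
             * writes now, even when c->keepalive is true. */
            apr_bucket *flush = apr_bucket_flush_create(f->c->bucket_alloc);
            APR_BUCKET_INSERT_BEFORE(APR_BRIGADE_LAST(bb), flush);
        }
        return ap_pass_brigade(f->next, bb);
    }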
-----Original Message-----
From: Brian Pane [mailto:[EMAIL PROTECTED]]
Sent: Sunday, June 23, 2002 9:38 PM
To: [EMAIL PROTECTED]
Subject: core_output_filter buffering for keepalives? Re: Apache 2.0
Numbers
On Sun, 2002-06-23
From: Brian Pane [mailto:[EMAIL PROTECTED]]
Ryan Bloom wrote:
I think we should leave it alone. This is the difference between
benchmarks and the real world. How often do people have 8 requests in a
row that total less than 8K?
As a compromise, there are two other options. You
With Apache 1.3 all you had to do to get a keep-alive was set your
content-length correctly:
HTTP/1.1 200 OK
Date: Mon, 24 Jun 2002 17:05:04 GMT
Server: Apache/1.3.22 (Unix) PHP/4.3.0-dev
X-Powered-By: PHP/4.3.0-dev
Content-length: 1024
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Ryan Bloom wrote:
-1 on buffering across requests, because the performance problems
caused by the extra mmap+munmap will offset the gain you're trying
to achieve with pipelining.
Wait a second. Now you want to stop buffering to fix a completely
different bug. The idea that we can't keep a file_bucket
Bill Stoddard wrote:
...
Solving the problem of setting aside the open fd just long enough to check
for a pipelined request will nearly completely solve the worst part (the
mmap/munmap) of this problem. On systems with expensive syscalls, we can do
browser detection and dynamically
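One way to perform that "check for a pipelined request" cheaply, sketched with
APR's poll API (an illustration, not the code httpd uses): a zero-timeout poll
reports whether the client has already sent more bytes, without consuming them.

    #include "apr_poll.h"
    #include "apr_network_io.h"

    static int client_has_pipelined_data(apr_socket_t *sock, apr_pool_t *p)
    {
        apr_pollfd_t pfd = { 0 };
        apr_int32_t nready = 0;

        pfd.p = p;
        pfd.desc_type = APR_POLL_SOCKET;
        pfd.reqevents = APR_POLLIN;
        pfd.desc.s = sock;

        /* Zero timeout: just ask whether bytes are already buffered. */
        apr_poll(&pfd, 1, &nready, 0);
        return nready > 0 && (pfd.rtnevents & APR_POLLIN);
    }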
Bill Stoddard wrote:
So changing the AP_MIN_BYTES_TO_WRITE just moves the relative position
of the write() and the check-pipeline read.
It has one other side-effect, though, and that's what's bothering me:
In the case where core_output_filter() decides to buffer a response because
it's
On Mon, Jun 24, 2002 at 01:07:48AM -0400, Cliff Woolley wrote:
Anyway, what I'm saying is: don't make design decisions of this type based
only on the results of an ab run.
+1
I think at this point ab should have the ability to interleave issuing
new connections, handling current requests, and