A few things:
- In your results, could you add the speedycgi version number (2.02),
and the fact that this is using the mod_speedycgi frontend.
The fork/exec frontend will be much slower on hello-world so I don't
want people to get the wrong idea. You may want to benchmark
- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; "mod_perl list" [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 4:37 PM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory
Are the speedycgi+Apache processes smaller than the mod_perl
processes? If not, the maximum number of concurrent requests you can
handle on a given box is going to be the same.
The size of the httpds running mod_speedycgi, plus the size of speedycgi
perl processes is
communicate with the backends, sending over the request and getting the output.
Speedycgi uses some shared memory (an mmap'ed file in /tmp) to keep track
of the backends and frontends. This shared memory contains the queue.
When backends become free, they add themselves at the front of the queue.
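That front-insertion is what gives SpeedyCGI its MRU behavior. A minimal sketch of the queue discipline (the names below are invented, and the real queue lives in an mmap'ed file rather than a Python deque):

```python
from collections import deque

# Free backends add themselves at the FRONT of the queue, and frontends
# also take from the FRONT, so the most recently used backend always
# serves the next request.
free_backends = deque()

def backend_done(backend_id):
    free_backends.appendleft(backend_id)   # free backend joins at the front

def frontend_acquire():
    return free_backends.popleft()         # next request goes to the MRU backend

# Three backends go idle; under light load one of them does all the work.
for b in (1, 2, 3):
    backend_done(b)
used = set()
for _ in range(10):
    b = frontend_acquire()
    used.add(b)
    backend_done(b)
```

Under light load the untouched backends stay idle long enough to be swapped out or reaped, which is the memory win discussed later in the thread.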
This is planned for a future release of speedycgi, though there will
probably be an option to set a maximum number of bytes that can be
buffered before the frontend contacts a perl interpreter and starts
passing over the bytes.
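As a sketch of that store-and-forward idea (the function and limit below are invented for illustration, not SpeedyCGI's implementation): the cheap frontend absorbs a slow upload itself, and an expensive backend interpreter is only tied up once the full body is in hand or the byte limit is crossed.

```python
import io

# Invented illustration of store-and-forward buffering in a frontend.
def buffered_forward(client_chunks, backend, max_bytes=1 << 20):
    buf = io.BytesIO()
    for chunk in client_chunks:        # slow client trickles the POST body in
        buf.write(chunk)
        if buf.tell() > max_bytes:
            # past the limit: stop buffering and involve a backend early
            raise ValueError("body exceeds buffer limit")
    return backend(buf.getvalue())     # backend is busy only for real work

result = buffered_forward([b"a" * 10, b"b" * 5], lambda body: len(body))
```

The point of the limit is that a pathological upload can't make the frontend buffer unbounded amounts of data.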
Currently you can do this sort of acceleration with script output
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory
Are the speedycgi+Apache processes smaller than the mod_perl
processes? If not, the maximum number of concurrent requests you can
handle on a given box is going to be the same.
"Jeremy Howard" [EMAIL PROTECTED] wrote:
A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the data has been delivered
to the client (at least that's my understanding of the lingering-close
issue that was recently discussed
0 stopped the paging. At concurrency
level 100, both mod_perl and mod_speedycgi showed similar rates with ab.
Even at higher levels (300), they were comparable.
That's what I would expect if both systems have a similar limit of how
many interpreters they can fit in RAM at once. Shared memory would help
Roger Espel Llima [EMAIL PROTECTED] writes:
"Jeremy Howard" [EMAIL PROTECTED] wrote:
I'm pretty sure those are my words you're quoting here, not Jeremy's.
A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the
Perrin Harkins wrote:
What I was saying is that it doesn't make sense for one to need fewer
interpreters than the other to handle the same concurrency. If you have
10 requests at the same time, you need 10 interpreters. There's no way
speedycgi can do it with fewer, unless it actually makes
Would you agree with the following statement?
Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory
NO!
When you can write a trans handler or an auth handler with speedy, then I
might agree with you. Until then I must insist you add "mod_perl
Apache::Reg
Perrin Harkins wrote:
Keith Murphy pointed out that I was seeing the result of persistent HTTP
connections from my browser. Duh.
I must mention that, having seen your postings here over a long period,
anytime I can make you say "duh", my week is made. Maybe the whole
month.
That issue
"Jeremy Howard" [EMAIL PROTECTED] writes:
Perrin Harkins wrote:
What I was saying is that it doesn't make sense for one to need fewer
interpreters than the other to handle the same concurrency. If you have
10 requests at the same time, you need 10 interpreters. There's no way
At 10:17 PM 12/22/2000 -0500, Joe Schaefer wrote:
"Jeremy Howard" [EMAIL PROTECTED] writes:
[snipped]
I posted a patch to mod_proxy a few months ago that specifically
addresses this issue. It has a ProxyPostMax directive that changes
its behavior to a store-and-forward proxy for POST data (it
Joe Schaefer wrote:
"Jeremy Howard" [EMAIL PROTECTED] writes:
I don't know if Speedy fixes this, but one problem with mod_perl v1 is
that if, for instance, a large POST request is being uploaded, this takes
a whole perl interpreter while the transaction is occurring. This is at
least one
Would you agree with the following statement?
Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory
Maybe; but for one thing, the feature set seems to be very different,
as others have pointed out. Secondly, the test that was
originally quoted didn't have much
The problem is that at a high concurrency level, mod_perl is using lots
and lots of different perl-interpreters to handle the requests, each
with its own un-shared memory. It's doing this due to its LRU design.
But with SpeedyCGI's MRU design, only a few speedy_backends are
I think you could actually make speedycgi even better for shared memory
usage by creating a special directive which would indicate to speedycgi to
preload a series of modules, and then to tell speedycgi to fork
that "master" backend preloaded module process and hand co
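The directive proposed here doesn't exist; as a rough sketch of the underlying preload-then-fork idea (standard copy-on-write sharing, illustrated in Python rather than Perl, POSIX-only because of fork()):

```python
import os
import sys

# Modules imported in the master process before fork() are inherited by
# every child via copy-on-write pages, so their code is loaded only once.
import json
import sqlite3   # stand-ins for "a series of modules" to preload

def spawn_worker():
    pid = os.fork()
    if pid == 0:
        # Child: the preloaded modules are already present, no import cost.
        os._exit(0 if "json" in sys.modules else 1)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

statuses = [spawn_worker() for _ in range(3)]
```

The pages stay shared until a child writes to them, which is why keeping preloaded data read-only matters for the memory savings.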
[ Sorry for accidentally spamming people on the
list. I was ticked off by this "benchmark",
and accidentally forgot to clean up the reply
names. I won't let it happen again :( ]
Matt Sergeant [EMAIL PROTECTED] writes:
On Thu, 21 Dec 2000, Ken Williams wrote:
Well then, why
benefits to mod_perl.
Sure, and that's why some people use it. But to say
"Speedycgi scales better than mod_perl with scripts that contain un-shared memory"
is to me quite similar to saying
"SUV's are better than cars since they're safer to drive drunk in."
Disc
At 09:53 AM 12/21/00 -0500, Joe Schaefer wrote:
[ Sorry for accidentally spamming people on the
list. I was ticked off by this "benchmark",
and accidentally forgot to clean up the reply
names. I won't let it happen again :( ]
Not sure what you mean here. Some people like the
"KW" == Ken Williams [EMAIL PROTECTED] writes:
KW Well then, why doesn't somebody just make an Apache directive to
KW control how hits are divvied out to the children? Something like
If memory serves, mod_perl 2.0 uses a most-recently-used strategy
to pull perl interpreters from the thread
Perrin Harkins wrote:
[cut]
Doesn't that appear to be saying that whichever process gets into the
mutex first will get the new request? In my experience running
development servers on Linux it always seemed as if the requests
would continue going to the same process until a request
0/0455.html
http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.0/0453.html
The problem is that at a high concurrency level, mod_perl is using lots
and lots of different perl-interpreters to handle the requests, each
with its own un-shared memory. It's doing this due to its LRU design.
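The effect is easy to see in a toy simulation (this illustrates the two queueing disciplines only, not either server's actual scheduler):

```python
from collections import deque

# Toy model: 10 idle workers, requests arrive one at a time. Under MRU the
# next request goes to the most recently freed worker; under LRU (roughly
# Apache 1.x's behavior) it goes to the least recently freed one.
def distinct_workers(policy, n_workers=10, n_requests=1000):
    idle = deque(range(n_workers))
    touched = set()
    for _ in range(n_requests):
        worker = idle.popleft() if policy == "mru" else idle.pop()
        touched.add(worker)
        idle.appendleft(worker)   # worker frees itself at the front
    return len(touched)

mru_count = distinct_workers("mru")   # a single hot worker does everything
lru_count = distinct_workers("lru")   # the load rotates through all workers
```

Every worker the LRU policy touches keeps its own un-shared perl data resident in RAM, which is the blow-up described above; under MRU the untouched interpreters' memory can be paged out harmlessly.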
Folks, your discussion is not short of wrong statements that can easily
be disproved, but I don't find it useful. Instead, please read:
http://perl.apache.org/~dougm/modperl_2.0.html#new
To quote the most relevant part:
"With 2.0, mod_perl has much better control over which PerlInterpreters are
Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory
At 09:16 PM 12/21/00 +0100, Stas Bekman wrote:
[much removed]
So the moment mod_perl 2.0 hits the shelves, this possible benefit
of speedycgi over mod_perl becomes irrelevant. I think this more or less
summarizes this thread.
I think you are right about the summarization. However, I also think
Would you agree with the following statement?
Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory
I don't know. It's easy to give a simple example and claim to be better.
So far, whoever has tried to show by benchmarks that they were better has
most often been proved wrong.
in a single interpreter, and an NT port.
I think you could actually make speedycgi even better for shared memory
usage by creating a special directive which would indicate to speedycgi to
preload a series of modules, and then to tell speedycgi to fork
that "master" backend
Would you agree with the following statement?
Under apache-1, speedycgi scales better than mod_perl with
scripts that contain un-shared memory
I don't know. It's easy to give a simple example and claim to be better.
So far, whoever has tried to show by benchmarks that he
the case a bit here. It's really easy to take
advantage of shared memory with mod_perl - I just add a 'use Foo' to my
startup.pl! It can be hard for newbies to understand, but there's nothing
difficult about implementing it. I often get 50% or more of my
application shared in this way. That's
[EMAIL PROTECTED] (Perrin Harkins) wrote:
Hi Sam,
[snip]
I am saying that since SpeedyCGI uses MRU to allocate requests to perl
interpreters, it winds up using a lot fewer interpreters to handle the
same number of requests.
What I was saying is that it doesn't make sense for one to need
n of which
system can fit more interpreters in RAM at once, and I still think
mod_perl would come out on top there because of the shared memory. Of
course most people don't run their servers at full throttle, and at less
than total saturation I would expect speedycgi to use less RAM and
possibly
FYI --
Sam just posted this to the speedycgi list just now.
X-Authentication-Warning: www.newlug.org: majordom set sender to
[EMAIL PROTECTED] using -f
To: [EMAIL PROTECTED]
Subject: [speedycgi] Speedycgi scales better than mod_perl with scripts
that contain un-shared memory
Date: Wed, 20 Dec 2000 20:18:37 -0800
From: Sam Horrocks [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
Original Message
On 10/8/99, 3:06:58 AM, "Anthony Gardner" [EMAIL PROTECTED] wrote
regarding Re: Urgent--Any limit on Hash table in Shared Memory?:
The only problem I had when creating a hash with loads of data using
IPC::Shareable was that I constructed the hash incorrectly.
To beg
I used the IPC::Shareable module to construct a nested hash table in
shared memory. It worked fine during the "on-demand" test. When I moved
from "on-demand" to "preload", an error came up saying "No space
left on device". The machine has 0.5GB of memory
hi,
so, I read all this stuff about using shared memory, preloading stuff so
that each child doesn't have its own copy. So I went ahead and compiled
SysV shared memory into my kernel. However, ipcs tells me that nothing is
using shared memory:
server# ipcs -m
Shared Memory:
T ID KEY