Ian Kallen wrote:
If I were you, I'd install my own perl in /home/eedalf, create
/home/eedalf/apache and then do (assuming ~/bin/perl is before
/opt/local/bin/perl in your path) something like:
Thanks, that's how I had it before - with Perl 5.6.0, Apache
1.1.3 and mod_perl 1.24 in my home
Sorry, s#1\.1\.3#1.3.13#
-Original Message-
From: Alexander Farber (EED) [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 04, 2001 4:58 AM
To: [EMAIL PROTECTED]
Subject: How do you run libapreq-0.31/eg/perl/file_upload.pl ?
[snip]
2) After putting
PerlModule Apache::Request
Location
Sorry for the late reply - I've been out for the holidays.
By the way, how are you doing it? Do you use a mutex routine that works
in LIFO fashion?
Speedycgi uses separate backend processes that run the perl interpreters.
The frontend processes (the httpd's that are running
-Original Message-
From: Tom Karlsson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 04, 2001 8:09 AM
To: [EMAIL PROTECTED]
Subject: mod_perl confusion.
Hello All,
I've recently looked through the mod_perl mail archives in
order to find
someone who has/had the same
JR Mayberry [EMAIL PROTECTED] wrote:
The Modperl handler benchmark, which was done on a dual P3 500mhz on
Linux does serious injustice to mod_perl. Anyone who uses Linux knows
how horrible it is on SMP, I think some tests showed it uses as little as
25% of the second processor..
It's an old
This is planned for a future release of speedycgi, though there will
probably be an option to set a maximum number of bytes that can be
buffered before the frontend contacts a perl interpreter and starts
passing over the bytes.
Currently you can do this sort of acceleration with script output
- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: "Perrin Harkins" [EMAIL PROTECTED]
Cc: "Gunther Birznieks" [EMAIL PROTECTED]; "mod_perl list"
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:56 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales
This may or may not be a mod_perl question:
I want to change the way an existing request is handled and it can be done
by making a proxy request to a different host but the argument list must
be slightly different. It is something that a regexp substitution can
handle and I'd prefer for the
On Thu, 4 Jan 2001, Roger Espel Llima wrote:
JR Mayberry [EMAIL PROTECTED] wrote:
Linux does serious injustice to mod_perl. Anyone who uses Linux knows
how horrible it is on SMP, I think some tests showed it uses as little as
25% of the second processor..
A simple benchmark with 'ab'
Hi there,
On Thu, 4 Jan 2001, Justin wrote:
So dropping maxclients on the front end means you get clogged
up with slow readers instead, so that isn't an option..
Try looking for Randall's posts in the last couple of weeks. He has
some nice stuff you might want to have a play with. Sorry, I
-Original Message-
From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 04, 2001 10:35 AM
To: Justin
Cc: [EMAIL PROTECTED]
Subject: Re: the edge of chaos
Hi there,
On Thu, 4 Jan 2001, Justin wrote:
So dropping maxclients on the front end means you get
"J" == Justin [EMAIL PROTECTED] writes:
J When things get slow on the back end, the front end can fill with
J 120 *requests* .. all queued for the 20 available modperl slots..
J hence long queues for service, results in nobody getting anything,
You simply don't have enough horsepower to serve
"Jeremy Howard" [EMAIL PROTECTED] wrote:
A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the data has been delivered
to the client (at least that's my understanding of the lingering-close
issue that was recently discussed
"TK" == Tom Karlsson [EMAIL PROTECTED] writes:
TK I've recently looked through the mod_perl mail archives in order to find
TK someone who has/had the same problem as me.
You should have found discussion about the variable
$Apache::Registry::NameWithVirtualHost in the archives. Curiously, it
In answering another question today, I noticed that the variable
$Apache::Registry::NameWithVirtualHost is not documented in the
perldoc for Apache::Registry.
While scanning the Registry.pm file, I further noticed that there is a
call to $r->get_server_name for the virtual host name. This too is
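For anyone finding this thread in the archives, the variable in question is simply set in the server start-up file. A hedged sketch follows; the semantics in the comments are my understanding of Apache::Registry's behaviour, not quoted from its perldoc:

```perl
# startup.pl -- sketch, based on my understanding of Apache::Registry:
# when true (the default), compiled script package names include the
# virtual host name, so same-named scripts on different virtual hosts
# do not collide. Set it to 0 to share one compiled copy across vhosts.
$Apache::Registry::NameWithVirtualHost = 1;
```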
but maybe someone can provide
me a small kick-start? Thank you
short answer -- you don't need anything more than some simple
scripting. Nothing at all in the server start-up file.
client HTML file:
<form action="myupload.plx" enctype="multipart/form-data"
method="post">
<input type="file"
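On the server side, a minimal handler in the style of the libapreq examples might look as below. This is a hedged sketch of that era's Apache::Request API, not the actual file_upload.pl; the package name and the field name "file" are assumptions, and it only runs inside a configured mod_perl Apache, so it is not standalone:

```perl
# Sketch of the receiving end of the upload form above (assumptions:
# the <input type="file"> field is named "file"; the handler is mapped
# to myupload.plx in the server config).
package My::FileUpload;
use strict;
use Apache::Request ();
use Apache::Constants qw(OK);

sub handler {
    my $r      = shift;
    my $apr    = Apache::Request->new($r);
    my $upload = $apr->upload('file');   # Apache::Upload object, or undef

    if ($upload) {
        my $fh = $upload->fh;            # filehandle on the uploaded bytes
        local $/;
        my $data = <$fh>;                # slurp; store it however you like
        $r->send_http_header('text/plain');
        $r->print("got ", length($data), " bytes of ", $upload->filename);
    }
    return OK;
}
1;
```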
On Thu, Jan 04, 2001 at 09:55:39AM -0500, Blue Lang wrote:
Eh, ab isn't really made as anything other than the most coarsely-grained
of benchmarks. Concurrency testing is useless because it will measure the
ratio of requests/second/processor, not the scalability of requests from
single to
Hi,
I work on a high-traffic site that uses apache/mod_perl, and we're
seeing some occasional segmentation faults and bus errors in our
apache error logs. These errors sometimes result in the entire apache
process group going down, though it seems to me that the problems
originate
-Original Message-
From: Vivek Khera [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 04, 2001 12:23 PM
To: Mod Perl List
Subject: missing docs
In answering another question today, I noticed that the variable
$Apache::Registry::NameWithVirtualHost is not documented in the
Hi,
I work on a high-traffic site that uses apache/mod_perl, and we're
seeing some occasional segmentation faults and bus errors in our
apache error logs. These errors sometimes result in the entire
apache process group going down, though it seems to me that the
problems
Does anyone have any experience with ab and sending multiple cookies?
It appears to be chaining cookies together, i.e.:
I'm doing -C cookie1=value1 -C cookie2=value2
and I'm retrieving cookies with
CGI::Cookie->parse($r->header_in('Cookie'));
and looping over %cookies, and it's doing something like
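For what it's worth, if both -C values do end up chained into a single Cookie header, a plain split recovers them without CGI::Cookie. The header value below is an assumed example of what ab might send, not captured output:

```perl
# Parse a combined Cookie header by hand; assumes the pairs arrive
# "; "-separated in one header, which is the chaining described above.
my $header  = 'cookie1=value1; cookie2=value2';   # assumed example input
my %cookies = map { split /=/, $_, 2 } split /;\s*/, $header;
print "$_=$cookies{$_}\n" for sort keys %cookies;
```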
Hi,
Thanks for the links! But I wasn't sure what in the first link
was useful for this problem, and, the vacuum bots discussion
is really a different topic.
I'm not talking of vacuum bot load. This is real world load.
Practical experiments (ok - the live site :) convinced me that
the well
I need more horsepower. Yes I'd agree with that !
However... which web solution would you prefer:
A. (ideal)
load equals horsepower:
all requests serviced in <=250ms
load slightly more than horsepower:
linear falloff in response time, as a function of % overload
..or..
B. (modperl+front
i see 2 things here: a classic queuing problem, and the fact
that swapping to disk is 1000's of times slower than serving
from ram.
if you receive 100 requests per second but only have the
ram to serve 99, then swapping to disk occurs, which slows
down the entire system. the next second comes and
Justin wrote:
Thanks for the links! But I wasn't sure what in the first link
was useful for this problem, and, the vacuum bots discussion
is really a different topic.
I'm not talking of vacuum bot load. This is real world load.
Practical experiments (ok - the live site :) convinced me that
On Wed, Jan 03, 2001 at 12:02:15AM -0600, Jeff Sheffield wrote:
I am ashamed ... I twiddled with the shiny bits.
my $auth_name = "WhatEver";
$SECRET_KEYS{ $auth_name } = "thisishtesecretkeyforthisserver";
### END MY DIRTY HACK
Note that without MY DIRTY LITTLE HACK it does not set those two
Does anyone out there have a clean, happy solution to the problem of users
jamming on links buttons? Analyzing our access logs, it is clear that it's
relatively common for users to click 2,3,4+ times on a link if it doesn't
come up right away. This is not good for the system for obvious reasons.
I
"Ed" == Ed Park [EMAIL PROTECTED] writes:
Ed Has anyone else thought about this?
If you're generating the form on the fly (and who isn't, these days?),
just spit a serial number into a hidden field. Then lock out two or
more submissions with the same serial number, with a 24-hour retention
of
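The serial-number idea can be sketched in plain Perl. Here the store is an in-memory hash, whereas a real deployment would need a shared store (dbm, database) with the 24-hour retention mentioned above; the function names are hypothetical:

```perl
use strict;
use Digest::MD5 qw(md5_hex);

my %seen;   # stand-in for a shared store with ~24h retention

# Generate a serial number to embed in a hidden form field.
sub new_serial { md5_hex(join ':', $$, time(), rand()) }

# Accept the first submission carrying a given serial, reject repeats.
sub accept_submission {
    my ($serial) = @_;
    return 0 if $seen{$serial}++;   # already seen: lock it out
    return 1;
}
```

Usage: embed new_serial() in the form; on POST, accept_submission($serial) is true the first time and false for every double-click thereafter.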
Sorry if this solution has been mentioned before (i didn't read the earlier
parts of this thread), and I know it's not as perfect as a server-side
solution...
But I've also seen a lot of people use javascript to accomplish the same
thing as a quick fix. Few browsers don't support javascript.
"Gunther" == Gunther Birznieks [EMAIL PROTECTED] writes:
Gunther But I've also seen a lot of people use javascript to accomplish the
Gunther same thing as a quick fix. Few browsers don't support javascript. Of
Gunther the small amount that don't, the Venn diagram merge of browsers that
Gunther
Yeah, but in the real world regardless of the FUD about firewalls and the
like...
The feedback that I have had from people using this technique is that the
apps that have had this code implemented experience dramatic reduction in
double postings to the point where they no longer exist.
And
Hi Sam,
I think we're talking in circles here a bit, and I don't want to
diminish the original point, which I read as "MRU process selection is a
good idea for Perl-based servers." Your tests showed that this was
true.
Let me just try to explain my reasoning. I'll define a couple of my
base
- Original Message -
From: "Justin" [EMAIL PROTECTED]
To: "Geoffrey Young" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 4:55 PM
Subject: Re: the edge of chaos
Practical experiments (ok - the live site :) convinced me that
the well recommended modperl
- Original Message -
From: "Ed Park" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 04, 2001 6:52 PM
Subject: getting rid of multiple identical http requests (bad users
double-clicking)
Does anyone out there have a clean, happy solution to the problem of users
Roger Espel Llima [EMAIL PROTECTED] writes:
"Jeremy Howard" [EMAIL PROTECTED] wrote:
I'm pretty sure I'm the person whose words you're quoting here,
not Jeremy's.
A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the