Our project needed persistent socket connections held open as well. There is
supposed to be a standard mechanism for passing file descriptors between Unix
processes, though its bugginess level depends on your OS. There is a Perl
module for this called Socket::PassAccessRights. So what you can do is
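Assuming Socket::PassAccessRights exposes the sendfd/recvfd pair I recall from its documentation, a minimal sketch of handing an open descriptor from a parent to a child over a socketpair might look like this (the calling convention and the file being opened are illustrative — verify against your installed version of the module):

```perl
use strict;
use warnings;
use Socket;
use Socket::PassAccessRights;   # CPAN module; API assumed, check your version

# Create a connected pair of Unix-domain sockets to carry the descriptor.
socketpair(my $parent, my $child, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
    or die "socketpair: $!";

if (my $pid = fork) {                       # parent: send an open fd
    close $child;
    open my $fh, '<', $0 or die "open: $!"; # any open file will do
    Socket::PassAccessRights::sendfd(fileno($parent), fileno($fh))
        or die "sendfd failed";
    waitpid($pid, 0);
} else {                                    # child: receive and adopt it
    close $parent;
    my $fd = Socket::PassAccessRights::recvfd(fileno($child));
    open my $in, '<&=', $fd or die "fdopen: $!";   # wrap the raw fd
    print scalar <$in>;                            # first line of the file
    exit 0;
}
```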
I've got a "reality check" question for
people to see that I'm not missing something obvious with our Apache::Reload
mod_perl setup.
We've recently installed Apache::Reload at
our site in production and it's working great. In what is probably not the
best 'software engineering' style, we've
Just wondering if anyone has encountered this before and if it's been fixed
in libapreq for the upcoming release.
Basically, whenever I try to use Mozilla 0.97 with a file upload field on a
form and don't select any file in the field, libapreq seems to hang on the
$R->parse() call. Mozilla 0.98
We just experienced an odd problem and were wondering if anyone has
encountered this before. We recently set the Apache LimitRequestBody
parameter to 10000000 (10M) and all was working fine until a recent restart.
We started getting errors in the logs whenever there was a file upload field
in the
to have the problems, while when the other person restarted the
server, it suddenly fixed itself.
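For reference, the directive in question lives in httpd.conf; a hypothetical fragment capping uploads at 10 MB (the value is a byte count) would be:

```apache
# Hypothetical httpd.conf fragment: limit request bodies (and thus file
# uploads) to 10 MB. LimitRequestBody takes bytes; 0 means unlimited.
LimitRequestBody 10485760
```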
Rob
- Original Message -
From: Rob Mueller (fastmail) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 4:19 PM
Subject: Odd mod_perl and LimitRequestBody problem
We
I recently had a similar problem. A regex that worked fine in sample code
was a dog in the web-server code. It only happened with really long strings.
I tracked the problem down to this warning in the 'perlre' manpage.
WARNING: Once Perl sees that you need one of $&, $`, or $'
anywhere in the
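As a possible workaround (my own sketch, not from the original poster): since perl 5.6 the @- and @+ match-offset arrays let you recover the matched substring without ever mentioning $&, so the interpreter never enables the costly per-match copying the manpage warns about:

```perl
use strict;
use warnings;

# Extract the matched text via the @- / @+ offset arrays instead of $&.
# Mentioning $& anywhere in a program makes Perl save the match text on
# every pattern match, which is what slows matching on long strings.
my $text  = 'error code 1234 in request';
my $match = '';
if ($text =~ /\d+/) {
    # substr over the offsets of group 0 is equivalent to $& for this match
    $match = substr($text, $-[0], $+[0] - $-[0]);
}
print "$match\n";    # prints "1234"
```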
I've had a little bit of a look, but can't
find anything in the mod_perl guide about this. Basically it seems to me that
'my' variables at the package level don't retain their value under
mod_perl.
For instance, consider the following
mod_perl handler.
package My::Module;
my $var;
sub
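To make the behaviour concrete, here is a self-contained sketch (the package name and counter are mine, not the original poster's). Within a single interpreter a file-scoped 'my' variable is compiled once and persists across handler calls; under mod_perl each Apache child has its own copy, and a module reload recompiles the file and resets it, which can look like the value "not being retained":

```perl
use strict;
use warnings;

package My::Counter;

my $var = 0;        # compiled once per interpreter, not once per request

sub handler {
    $var++;         # persists across calls within this process
    return $var;
}

package main;

# Simulate two requests hitting the same Apache child:
print My::Counter::handler(), "\n";   # prints 1
print My::Counter::handler(), "\n";   # prints 2 -- not reset between calls
```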
The thing you were missing is that on an OS with an aggressively caching
filesystem (like Linux), frequently read files will end up cached in RAM
anyway. The kernel can usually do a better job of managing an efficient
cache than your program can.
For what it's worth, DeWitt Clinton
Some more points.
I'd like to point out
that I don't think the lack of actual concurrency testing is a real problem, at
least for most single-CPU installations. If most of the time is spent doing other stuff in a request (which
is most likely the case), then on average when a process goes to
In general, the Cache::* modules were designed with clarity and ease of
use in mind. For example, the modules tend to require absolutely no
set-up work on the end user's part and try to be as fail-safe as
possible. Thus there is some run-time overhead involved. That said, I'm
certainly not
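To illustrate the "no set-up work" point, a minimal Cache::FileCache example (from the Cache::Cache distribution; the namespace and expiry values here are mine) can be as short as:

```perl
use strict;
use warnings;
use Cache::FileCache;   # part of the Cache::Cache distribution on CPAN

# new() creates the on-disk cache directory and namespace automatically;
# no prior set-up is required on the user's part.
my $cache = Cache::FileCache->new({
    namespace          => 'demo',
    default_expires_in => 600,     # seconds
});

$cache->set('answer', 42);
print $cache->get('answer'), "\n";   # prints 42
```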
Just thought people might be
interested...
I sat down the other day and wrote a test
script to try out various caching implementations. The script is pretty basic at
the moment; I just wanted to get an idea of the performance of the different
methods.
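Not the author's script, but the flavour of such a comparison using only the core Benchmark and Storable modules might be (the data structure and iteration count are illustrative):

```perl
use strict;
use warnings;
use Benchmark qw(timethese);
use Storable qw(freeze thaw);

# Compare a plain in-memory hash cache against a Storable-serialised
# round trip of the same structure, the kind of cost a file- or
# IPC-backed cache pays on every access.
my %mem_cache;
my $data = { name => 'test', values => [ 1 .. 50 ] };

timethese(10_000, {
    'in-memory hash' => sub {
        $mem_cache{key} = $data;
        my $hit = $mem_cache{key};
    },
    'storable copy'  => sub {
        my $frozen = freeze($data);
        my $hit    = thaw($frozen);
    },
});
```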
The basic scenario is the common mod_perl
Just wanted to add an extra thought that I
forgot to include in the previous post.
One important aspect missing from my tests
is actual concurrency testing. In most real-world programs, multiple
applications will be reading from and writing to the cache at the same time.
Depending on the