[EMAIL PROTECTED] (Randal L. Schwartz) wrote:
I'm told that my CPU throttler
was used at etoys.com for a similar purpose, and permitted them to
keep from losing millions of dollars of revenue due to people
spidering their catalog.
That's correct, although it was actually a bunch of DoS
Sure - I believe in magic, depending on your definition of it. I KNOW
there's a 4th method, because I've seen it work. There is an e-commerce
web site which uses an outside cart programmed in CGI (Perl?). The original
web site passes no identifying marks such as the session ID through the URL
on 5/12/01 5:46 PM, Morbus Iff at [EMAIL PROTECTED] wrote:
I store a .stor file, which is a Storable dump of my XML tree. I check the
mtime of that against the mtime of the .xml file and load whichever is
newer. It works fast and is very simple.
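That mtime check might look like this (file names are invented, and XML::Simple stands in for whatever actually builds the tree):

```perl
use Storable qw(store retrieve);
use XML::Simple;    # stand-in for the real parser

my ($xml, $stor) = ('data.xml', 'data.stor');
my $tree;
if (-e $stor && (stat $stor)[9] >= (stat $xml)[9]) {
    # The cached dump is at least as new as the XML: load it directly.
    $tree = retrieve($stor);
} else {
    # The XML changed since the last dump: reparse and refresh the cache.
    $tree = XMLin($xml);
    store($tree, $stor);
}
```

`(stat $file)[9]` is the file's mtime; the whole check costs two stat calls per request.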
I'll certainly check it out.
The only
It's not hard to do, but it is potentially dangerous since you could
overwrite globals like $/ and change the behavior of your program. In
general, it's best to avoid cluttering the symbol table.
- Perrin
on 5/9/01 5:45 PM, Morbus Iff at [EMAIL PROTECTED] wrote:
Keep in mind, if you load this data during startup (in the parent) it will
be shared, but reloading it later will make a separate copy in each child,
chewing up a large amount of memory. You might have better luck using dbm
That is
on 5/9/01 5:14 PM, Morbus Iff at [EMAIL PROTECTED] wrote:
That, unfortunately, doesn't tell me what causes a USR2 signal to be sent
to Apache.
You can use the kill command to send a USR2 signal.
Or when it's caused.
When you send it.
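Concretely, that means something like `kill -USR2 \`cat /usr/local/apache/logs/httpd.pid\`` (the pid-file path is an assumption; it varies by install). A self-contained demo of a process catching the signal:

```shell
#!/bin/sh
# Demo of the mechanism only: install a USR2 handler, then signal ourselves.
# For Apache you would signal the parent process from its pid file instead.
trap 'echo reloaded > /tmp/usr2_demo.txt' USR2
kill -USR2 $$
cat /tmp/usr2_demo.txt   # prints "reloaded"
```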
I only want to reload the file when said file has
on 5/4/01 9:28 AM, Mark Maunder at [EMAIL PROTECTED] wrote:
I have an Apache::Registry script that is using XML::Parser. The parser
throws a 'die' call if it encounters a parse error (Why?).
Because it's an exception and the parser can't continue.
I was handling this by putting the code
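A common way to contain that die (a sketch; the file name and recovery step are invented):

```perl
use XML::Parser;

my $parser = XML::Parser->new(Style => 'Tree');
my $tree = eval { $parser->parsefile('input.xml') };
if ($@) {
    # XML::Parser dies on any well-formedness error; trap it here
    # rather than letting the whole Registry script abort.
    warn "XML parse failed: $@";
}
```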
on 4/30/01 8:47 PM, brian moseley at [EMAIL PROTECTED] wrote:
On Mon, 30 Apr 2001, Jeffrey W. Baker wrote:
type of exception. Right now I cannot in fact think of
any program I have written that branches on the type of
exception. Java encourages this with multiple catch
in CP Web Mail,
On Fri, 20 Apr 2001, Francesco Pasqualini wrote:
But are there in the mod_perl architecture some guidelines and/or
frameworks that encourages the MVC design patern ? I think that
Apache::ASP could be (for example) the right tool, adding the
"forward" feature.
The forward feature looks like
"Chutzpah" is an interesting way of putting it. I've been thinking
of them as "slimeballs in the business of conning webkids into
thinking they have a real RDBMS product".
(It isn't a moot point, because it's the same people working on
it: human character issues are actually relevant when
Can you briefly explain why it leaks memory?
I haven't tried it, but I'm guessing it's creating a new anonymous sub on
every request.
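The suspected pattern would look something like this (hypothetical code, not taken from the module in question):

```perl
# Each request compiles a fresh closure; because the registry below
# keeps a reference to every one, none of them is ever freed.
my @registry;
sub handler {
    my $r = shift;
    push @registry, sub { $r->print("hello") };   # new anon sub per request
    return 0;   # OK
}
```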
I have been playing with Apache::Leak and Devel::Leak trying to figure out
what is happening when Perl code leaks memory, but I haven't got my head
around it
b) Flat file: Create a Linux directory structure with the same hierarchy
as the attributes, i.e., the directory structure is
publishers/sizes/types/ip numbers, where the ip number is the file name
which contains a list of ads. The objective is to pick the right file,
open this file and create a hash with
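The lookup step could be sketched as follows (the base path is invented, and $publisher, $size, $type and $ip are assumed to hold the request's attributes):

```perl
# Map the attributes straight onto the directory hierarchy.
my $path = join '/', '/var/ads', $publisher, $size, $type, $ip;
open my $fh, '<', $path or die "no ad list at $path: $!";
chomp(my @ads = <$fh>);
close $fh;
my %ads = map { $_ => 1 } @ads;   # one key per ad in the chosen file
```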
On 16 Apr 2001, Chip Turner wrote:
The modperl book mentions it double hashes to prevent a
malicious user from concatenating data onto the values being checked.
I don't know if they are referring to this weakness, but I suspect
they are. Sadly, the book doesn't seem to offer a
What I'm trying to do is have apache build the httpd.conf
file dynamically when it starts from a MySQL database.
It might be easier and more bulletproof to build the conf file off-line with
a simple perl script and a templating tool. We did this with Template
Toolkit and it worked well.
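A minimal sketch of that off-line step (template name and variables are invented):

```perl
use Template;

my $tt = Template->new() or die Template->error();
$tt->process('httpd.conf.tt',
             { servername => 'www.example.com', port => 8080 },
             'httpd.conf')
    or die $tt->error();
```

Run it from cron or by hand whenever the database changes, then restart Apache; the running server never has to talk to MySQL.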
- Perrin
That would be fine and dandy, but it's not exactly what I'm going after.
Currently if I want to
Matt Sergeant wrote:
Is there a way I could use LocationMatch to specify a not condition?
as in
<LocationMatch !~ "/(thisfile|thatDir|whatever).*">
SSLVerifyClient require
</LocationMatch>
That would let me list the exceptions, and everything else would be
restricted by default..
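One workable pattern (an untested sketch) is to invert it: restrict everything with a broad section, then relax the named exceptions, since later matching sections override earlier ones:

```apache
# Require client certs everywhere...
<Location "/">
    SSLVerifyClient require
</Location>

# ...then relax it for the exceptions:
<LocationMatch "/(thisfile|thatDir|whatever)">
    SSLVerifyClient none
</LocationMatch>
```

On builds whose regex engine supports lookahead, a single `<LocationMatch "^/(?!thisfile|thatDir|whatever)">` can express the same thing in one block.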
As part of my ongoing effort to streamline my mod_perl apps, I've come
to discover the joy of constant subroutines and perl's ability to
inline or eliminate code at compile time. I have a solution that
works, but would be interested in seeing if others had better
syntactic sugar..
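The basic trick being referred to (a sketch of the idiom, not the poster's code):

```perl
use constant DEBUG => 0;

# DEBUG is a constant sub, so perl folds it at compile time and
# eliminates the guarded statement entirely when it is false:
print STDERR "expensive trace output\n" if DEBUG;
```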
You
I am currently using Storable's lock_store and lock_retrieve to maintain
a persistent data structure. I use a session_id as a key and each data
structure has a last modified time that I use to expire it. I was under
the impression that these two functions would be safe for concurrent
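The pattern under discussion looks roughly like this ($session_id and $data are assumed set; field names are invented). Note that the advisory lock is only held for the duration of each call, so the read-modify-write sequence as a whole is not atomic:

```perl
use Storable qw(lock_store lock_retrieve);

my $file  = 'sessions.stor';
my $store = -e $file ? lock_retrieve($file) : {};
$store->{$session_id} = { data => $data, last_modified => time };
# Entries could be expired here by comparing last_modified to time().
lock_store($store, $file);
```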
I'm trying to address 2 issues:
A. Avoiding a single point of failure associated with
having a central repository for the data, such as an NFS
share or a single database server.
B. Avoiding the overhead from using heavyweight tools like
database replication.
So I've been
On Thu, 29 Mar 2001, Victor Michael Blancas wrote:
I'm planning to implement a DBI session management integrated with
Apache::ASP, much like how Apache::Session works.
Might as well just use Apache::Session, if it already does what you need.
Is this better for clustered web servers with a
On Tue, 27 Mar 2001, DeWitt Clinton wrote:
Which reminds me of something. These cache objects are not currently
thread safe. When should I start expecting multi-threaded
apache/mod_perl to become mainstream enough to warrant an overhaul of
the code? I imagine that nearly most Perl
have done a search on CPAN for "resume" and "cv" and did not come up with
anything like what i am doing:
http://www.zeuscat.com/andrew/work/resume/formats.shtml
- Perrin
Ok, what about calling sync before accessing the database? (read and write)
Will it force the process to sync its data with the disk, or will it cause
the corruption of the file on the disk, as the process might have a stale
data?
Well, that's what we don't know. As David Harris pointed out,
On Tue, 20 Mar 2001, Stas Bekman wrote:
Is anyone aware of a safe way to do multi-process read/write access
through a dbm module other than BerkeleyDB.pm without tie-ing and
untie-ing every time? I thought that was the only safe thing to do
because of buffering issues, but this seems
On Tue, 20 Mar 2001, John Mulkerin wrote:
There is no error message returned, it just goes back to the httpd 403
error screen.
What about in the error log? Have you read the DBI docs on how to get
your error message to print? You should either have RaiseError on or be
checking return codes
On Tue, 20 Mar 2001, Tim Gardner wrote:
I understand that the RSS is the resident size in KB and the SZ
column is the size of the process, but what should I be seeing in the
way of reduced memory? The 13MB/18MB is not much different from when
I don't preload anything. Should I be seeing
On Tue, 20 Mar 2001, Joshua Chamas wrote:
I know the tie/untie MLDBM::Sync strategy with DB_File is
slow, but what size data are you caching?
I'm not. Well, actually I am, but I use BerkeleyDB which handles its
own locking. I just noticed this in the Guide and figured that either it
was out
On Wed, 21 Mar 2001, Stas Bekman wrote:
You mean with DB_File? There's a big warning in the current version
saying not to do that, because there is some initial buffering that
happens when opening a database.
The warning says not to lock on the dbm fd but on an external file!
I think you'll
Stas Bekman wrote:
So basically what you are saying is that sync() is broken and shouldn't be
used at all. Something fishy is going on. The purpose of sync() is to
flush the modifications to the disk.
Saving changes to disk isn't the problem. The issue is that some of the
database gets
On Mon, 19 Mar 2001, Joshua Chamas wrote:
A recent API addition allows for a secondary cache layer with
Tie::Cache to be automatically used
When one process writes a change to the dbm, will the others all see it,
even if they use this?
- Perrin
On Mon, 19 Mar 2001, Charles J. Brabec wrote:
The Perl advocate's version:
mod_perl: Let's see you try to do this with Python.
I know you're only joking, but let's not fall into that trap of confusing
arrogance with advocacy. This is my chief complaint about Pythoners:
they're always
While working on adding info on Berkeley DB to the Guide, I came across
this statement:
"If you need to access a dbm file in your mod_perl code in the read only
mode the operation would be much faster if you keep the dbm file open
(tied) all the time and therefore ready to be used. This will
On Mon, 19 Mar 2001, John Mulkerin wrote:
I'm trying to use the plain vanilla TicketTool.pm from O'Reilly's mod
perl book, Apache Modules with Perl and C. It uses Tie::DBI to create
a hash of the mysql connection. When I run just the authentication
subroutine with perl -d "authenticate.pm"
On Sat, 17 Mar 2001, Surat Singh Bhati wrote:
Once I generate someoutput or page using my handler, I want to
pass it to apache to process the #exec.
Apache::SSI does not support #exec and "PerlSSI disabled in DSO build"
I am using the DSO mod_perl.
Any solution?
Apache::SSI does
On Wed, 14 Mar 2001, Issac Goldstand wrote:
I still think that the above line is confusing: It is because mod_perl is
not sending headers by itself, but rather your script must provide the
headers (to be returned by mod_perl). However, when you just say "mod_perl
will send headers" it is
On Tue, 13 Mar 2001, Issac Goldstand wrote:
The only problem was the PerlSendHeaders option. The first fifty or
so times that I read the manpages, I understood that PerlSendHeader On
means that mod_perl will SEND HEADERS, and that off meant supply your
own... Somehow I figured out
On Tue, 13 Mar 2001, Andrew Ho wrote:
Um, you're getting me confused now, but PerlSendHeader On means that
mod_perl WILL send headers.
I recognize this confusion. Most recovering CGI programmers think that
"PerlSendHeader On" means that you no longer have to do this in your CGI:
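That is, even with PerlSendHeader On the script still prints its own header block, just as it did under CGI; mod_perl merely scans the output for it:

```perl
# Still required in the script itself:
print "Content-type: text/html\n\n";
```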
I'm very intrigued by your thinking on locking. I had never
considered the transaction based approach to caching you are referring
to. I'll take this up privately with you, because we've strayed far
off the mod_perl topic, although I find it fascinating.
One more suggestion before you take
Can I ask why you are not using IPC::ShareLite (as it's pure C and
apparently much faster than IPC::Shareable - I've never benchmarked it
as I've always used IPC::ShareLite).
Full circle back to the original topic...
IPC::MM is implemented in C and offers an actual hash interface backed by
On Sat, 10 Mar 2001, Christian Jaeger wrote:
For all of you trying to share session information efficiently my
IPC::FsSharevars module might be the right thing. I wrote it after
having considered all the other solutions. It uses the file system
directly (no BDB/etc. overhead) and provides
"Daniel Little (Metrex)" wrote:
Along the same lines, how about making SizeAwareMemoryCache as well so that
you can specify just how much data you want stored in the cache.
Sounds like Joshua Chamas' Tie::Cache module. It provides a
size-limited LRU cache.
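For reference, a usage sketch of that module (the option name is from memory; treat it as an assumption):

```perl
use Tie::Cache;

# LRU cache holding at most 100 entries; least-recently-used
# entries are evicted when the limit is reached.
tie my %cache, 'Tie::Cache', { MaxCount => 100 };
$cache{some_key} = 'some value';
```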
- Perrin
Christian Jaeger wrote:
Yes, it uses a separate file for each variable. This way locking is
also solved: each variable has its own file lock.
You should take a look at DeWitt Clinton's Cache::FileCache module,
announced on this list. It might make sense to merge your work into
that module,
At the upcoming ApacheCon in April, Bill Hilf and I will be presenting a
talk called "Building a Large-Scale E-Commerce Site with Apache and
mod_perl." One of the things we'll be covering is our use of Berkeley
DB, including some problems we encountered with it and our
recommendations on how to
I have some preliminary benchmark code -- only good for relative
benchmarking, but it is a start. I'd be happy to post the results
here if people are interested.
Please do.
- Perrin
On Tue, 6 Mar 2001, Daniel wrote:
Hi, I'm having a bit of trouble using Apache::Speedlimit
...
After running for about 5 minutes the handler dies with the following in
the logfile:
[Tue Mar 6 17:32:07 2001] [error] [Tue Mar 6 17:32:07 2001] null:
Munged shared memory segment (size
I am writing an apache perl module which logs HTTP
environment variables. This is fine for static content
(html, images) but is a problem for dynamic content
such as php.
Why doesn't LogFormat work for you?
- Perrin
But when I print all the values of @INC in mod_perl through the
browser, I see duplicate entries for my directory. But under CGI, I
don't see any.
What might be the reason?
I can think of two possibilities. First, you might be adding
/usr/local/apache/lib/perl (or where ever your Apache lives +
On Wed, 28 Feb 2001, Jason Terry wrote:
My problem is that recently I have had some users that are getting
impatient and hitting the reload/refresh button OFTEN. In some
instances this causes one single person to have over 40 httpd children
service JUST them. This causes my server to start
Adi Fairbank wrote:
I am trying to squeeze more performance out of my persistent session cache. In
my application, the Storable image size of my sessions can grow upwards of
100-200K. It can take on the order of 200ms for Storable to deserialize and
serialize this on my (lousy) hardware.
I have a high traffic website (looks like 200 GB output per month,
something around 10-20 hits per day) hosted on a commercial
service. The service does not limit my bandwidth usage, but they limit the
number of concurrent Apache process that I can have to 41. This causes the
server
On Thu, 15 Feb 2001, Stas Bekman wrote:
I might be barking up the wrong tree, but why cron? Why don't you use
at(1)?
And there's a CPAN module for it: Schedule::At. It claims to be
cross-platform, and I believe NT has a version of at(1).
- Perrin
On Thu, 15 Feb 2001, Matt Sergeant wrote:
It's just a convenience thing. I've wanted to be able to do this too, for
example to have emails go off at a particular interval. So yes, it can be
done as cron + URI, but I'm just jealous of AOLServer's ability to do it
all in one. This is especially
Huh? Why would you call it if there's nothing to do? Are you thinking
you'll write a cron-ish task/timing spec for your Perl app and just use
the cron triggers as a constant clock?
Yes, exactly. My plan is to have a table with the tasks in my database,
and check expired tasks in a
On Wed, 14 Feb 2001, Pierre Phaneuf wrote:
I guess two people's "simpler" aren't always the same: I find it easier
laying out a table and querying it than hacking something to fiddle with
my crontab safely.
As far as I know, crontab -e is perfectly safe.
- Perrin
On Tue, 13 Feb 2001, Pierre Phaneuf wrote:
Well, if I call the "check for things to do" URI every minute, then I'll
be just fine. Many times, I'll just check and find nothing to do
Huh? Why would you call it if there's nothing to do? Are you thinking
you'll write a cron-ish task/timing spec
On Mon, 12 Feb 2001, Aaron Schlesinger wrote:
I have a line in my httpd.conf:
PerlRequire /path/to/startup.pl
In startup.pl I have this line:
use lib '/path/to/module';
This is not being added to my @INC like it should. Any
thoughts?
How do you know it isn't being added? Try
On Thu, 8 Feb 2001, L.M.Orchard wrote:
Now, if only I could get back to un-mothballing Iaijutsu/Iaido and do Zope
the right way under perl... :)
When I first looked at OI, I was thinking that it has a lot of the
plumbing (O/R mapping, security model, application model) covered and you
could
On Thu, 8 Feb 2001, Stephane Bortzmeyer wrote:
On Tuesday 6 February 2001, at 21 h 57, the keyboard of Chris Winters
[EMAIL PROTECTED] wrote:
I'm jazzed to announce the public release of OpenInteract, an
extensible web application framework using mod_perl and the Template
Toolkit as
On Thu, 8 Feb 2001, Vivek Khera wrote:
Ok... Upgrade to "Apache/1.3.17 (Unix) mod_perl/1.25_01-dev" fixed the
object destroy issue. Yay!
Old versions were Apache 1.3.14 and mod_perl 1.24_02-dev.
Well, that is odd since I'm running 1.3.12 and 1.24_01, but you never know
what evils might be
On Wed, 7 Feb 2001, Vivek Khera wrote:
Ok... here's a mini-plugin that exhibits this behavior, and a command
line script for it.
This all looks like it should work, and the plugin object should get
destroyed. Until someone finds the source of the problem, you could work
around it by keeping
On Wed, 7 Feb 2001, Vivek Khera wrote:
Did you (or anyone else) reproduce the non-destroy of my mini-plugin?
I didn't actually run it; just poked through the code.
I'd like to at least know if I'm doing something wrong in mod_perl. I
find it disconcerting to have my plugin objects sitting
On Wed, 7 Feb 2001, Vivek Khera wrote:
Did you (or anyone else) reproduce the non-destroy of my mini-plugin?
I'd like to at least know if I'm doing something wrong in mod_perl. I
find it disconcerting to have my plugin objects sitting around unused
and unreaped, aka, memory leakage.
Okay, I
On Tue, 6 Feb 2001, Vivek Khera wrote:
However, at the end of the template processing, the object is not
destroyed; that is, the DESTROY() method is never called, and
therefore the tied hash never gets untied and Apache::Session::MySQL
doesn't get a chance to write the data back to the
On Tue, 6 Feb 2001, darren chamberlain wrote:
Vivek Khera ([EMAIL PROTECTED]) said something to this effect on 02/06/2001:
However, at the end of the template processing, the object is not
destroyed; that is, the DESTROY() method is never called, and
therefore the tied hash never gets
Chris Winters wrote:
I'm jazzed to announce the public release of OpenInteract, an
extensible web application framework using mod_perl and the Template
Toolkit as its core technologies.
Hi Chris,
I've been reading the docs for the last couple of days and it looks very
interesting. It's
Robert Landrum wrote:
I have some very large httpd processes (35 MB) running our
application software. Every so often, one of the processes will grow
infinitly large, consuming all available system resources. After 300
seconds the process dies (as specified in the config file), and the
Drew Taylor wrote:
I have a slightly different twist on this question. We run Registry scripts on
our site for debugging purposes. I would love to have a module for saving
variables/data structures on a per-request basis (like the current Apache
notes), but internally using pnotes under
[EMAIL PROTECTED] wrote:
I'm interested in doing rate-limiting with Apache. Basically, I want
to give Apache a target bitrate to aim at. When writing to one user,
it writes as close to bitrate as the user/network can suck it down.
When writing to two users (two connections), it writes to
Gunther Birznieks wrote:
GoF did not introduce Model-View-Controller architecture. But it is
discussed in Wiley's "A System of Patterns: Pattern-Oriented Software
Architecture".
MVC is frequently used in mod_perl apps. For example, see
Apache::PageKit.
- Perrin
Dave Rolsky wrote:
On Mon, 5 Feb 2001, Perrin Harkins wrote:
First, BSD::Resource can save you from these. It will do hard limits on
memory and CPU consumption. Second, you may be able to register a
handler for a signal that will generate a stack trace. Look at
Devel::StackTrace
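A sketch of those hard limits with BSD::Resource (the values are illustrative, and which resource constants exist varies by OS):

```perl
use BSD::Resource;

# Hard caps: 60 CPU-seconds and a 64 MB data segment.
setrlimit(RLIMIT_CPU, 60, 60)
    or warn "couldn't set CPU limit";
setrlimit(RLIMIT_DATA, 64 * 2**20, 64 * 2**20)
    or warn "couldn't set memory limit";
```

A child that exceeds these is killed by the kernel rather than dragging the whole box into swap.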
Joshua Chamas wrote:
Hey,
Per Perrin Harkin's advice, and my client's consent, I hacked
up Apache::SizeLimit to support MAX_PROCESS_UNSHARED_SIZE config,
where instead of limiting by the apparent process size, one limits
by the amount of unshared memory being used.
I actually did submit
Where I'm getting hosed is that %config and %session have data
I need visible to the Text::Template objects themselves. I've
RTFM'ed until my eyes are pink, and I see no options short of
copying variables wholesale into another Package, but then I
still can't get at them and "use strict"
On Tue, 23 Jan 2001, John Hughes wrote:
I had already reached the same conclusion after I saw that
everyone would have to remember to say "my Dog $spot;" every time or the
whole thing falls apart.
Falls apart? How?
If you forget the "Dog" part somewhere, it's slower than a normal
On Mon, 22 Jan 2001 [EMAIL PROTECTED] wrote:
(section 4.3, pp 126-135) I hadn't heard about pseudo-hashes. I now
desire a data structure with non-numeric keys, definable iteration
order, no autovivification, and happy syntax. (And, of course,
fast-n-small :-) Having Conway's blessing is nice
"Christopher L. Everett" wrote:
So what I'd like to know is: is there any way of picking up
configuration info from the httpd-perl.conf at server startup?
If you don't need to have different configurations for each virtual host
or directory, you could just use globals.
Perl
Todd Finney wrote:
The one-sentence version of my question is: Is there a
problem with tying a session twice during two different
HeaderParserHandlers, as long as you're doing the standard
cleanup stuff (untie | make_modified) in each?
It seems like the answer should be no unless there's some
Sam Horrocks wrote:
say they take two slices, and interpreters 1 and 2 get pre-empted and
go back into the queue. So then requests 5/6 in the queue have to use
other interpreters, and you expand the number of interpreters in use.
But still, you'll wind up using the smallest number of
On Fri, 19 Jan 2001, Sam Horrocks wrote:
You know, I had brief look through some of the SpeedyCGI code yesterday,
and I think the MRU process selection might be a bit of a red herring.
I think the real reason Speedy won the memory test is the way it spawns
processes.
Please take
On Wed, 17 Jan 2001, Sam Horrocks wrote:
If in both the MRU/LRU case there were exactly 10 interpreters busy at
all times, then you're right it wouldn't matter. But don't confuse
the issues - 10 concurrent requests do *not* necessarily require 10
concurrent interpreters. The MRU has an
On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
here is an excerpt from httpd.h:
Good reading. Thanks.
It looks as if Apache should find the right number of servers for a steady
load over time, but it could jump up too high for a bit when the load
spike first comes in, pushing into swap if
I've heard mod_perm costs a lot more than its worth. There was an
open-source clone called mod_home_perm but it wasn't very successful.
Some people say you should skip it altogether and just use mod_hat.
On Thu, 18 Jan 2001, Terry Newnham wrote:
My boss has asked me to set up a web server on
On Wed, 17 Jan 2001, ___cliff rayman___ wrote:
i and others have written on the list before, that pushing apache
children into swap causes a rapid downward spiral in performance. I
don't think that MaxClients is the right way to limit the # of
children. i think MaxSpareCoreMemory would make
The RPM/tarball option worries me a
bit, since if I do forget a file, then I'll be down for a while, plus I
don't have another machine of the same type where I can create the
tarball.
There's no substitute for testing. If it's really important to have a very
short down time, you need a
On Tue, 16 Jan 2001, Honza Pazdziora wrote:
The machines are alright memorywise, they seem to be a bit slow on
CPU, however what bothers me is the deadlock situation to which they
get. No more slow crunching, they just stop accepting connections.
I've only seen that happen when something was
On Mon, 15 Jan 2001, Vivek Khera wrote:
I tend to write my apps in a modular fashion, so that each module
connects to the database and fetches its data and then disconnects.
Often, a program will require data from several modules resulting in a
lot of wasted connect/disconnect ops.
On Mon, 15 Jan 2001, Dave Armstrong wrote:
I just moved from dedicated to virtual hosting sigh, and was
wondering how to configure mod_perl to install the modules to a
private lib, outside of @INC.
http://perl.apache.org/guide/install.html#Installing_Perl_Modules_into_a_D
- Perrin
On Mon, 15 Jan 2001, Ask Bjoern Hansen wrote:
I tend to set the number to N number of requests. If each httpd child
needs to be forked every 1 requests that's pretty insignificant
and it can save you from some blowups.
The reason I like using SizeLimit instead of a number of requests is
but it's a bummer that the parent
doesn't run END blocks. Will it run cleanup handlers?
Cleanup handlers are run by the child processes. What does that have to
do with the parent? Or am I missing something?
I meant "is there a way to run a cleanup handler in the parent after its
work is done?", but I
On Thu, 11 Jan 2001, Rob Bloodgood wrote:
Second of all, with the literally thousands of pages of docs necessary to
understand in order to be really mod_perl proficient
Most of the documentation is really reference-oriented. All the important
concepts in mod_perl performance tuning fit in a
On Thu, 11 Jan 2001 [EMAIL PROTECTED] wrote:
I think you're making this much harder than it needs to be. It's this
simple:
MaxClients 30
PerlFixupHandler Apache::SizeLimit
<Perl>
use Apache::SizeLimit;
# sizes are in KB
$Apache::SizeLimit::MAX_PROCESS_SIZE = 3;
</Perl>
On Thu, 11 Jan 2001, Doug MacEachern wrote:
of course, there is such a "trick"
http://forum.swarthmore.edu/epigone/modperl/thandflunjimp/[EMAIL PROTECTED]
Documentation patch attached.
- Perrin
1039a1040,1046
Cleanup functions registered in the parent process (before
On Wed, 10 Jan 2001, Dave Rolsky wrote:
Is there any way to distinguish between a child being shutdown (say
maxrequests has been exceeded) versus all of Apache going down (kill
signal sent to the original process or something).
Register an END block in your startup.pl, and have it check it's
On Wed, 10 Jan 2001, Scott Alexander wrote:
It really peaked at 14:38:41 and then in the error_log
Ouch! malloc failed in malloc_block()
DBI-connect failed: Too many connections at
/systems/humakpro/lib/library.pm line 213
[Wed Jan 10 14:38:41 2001] [error] Can't call method "prepare"
On Thu, 11 Jan 2001, Stas Bekman wrote:
the parent process doesn't run the END block.
Randal's solution is probably better, but it's a bummer that the parent
doesn't run END blocks. Will it run cleanup handlers?
- Perrin
What I would like to see though is instead of killing the
child based on VmRSS on Linux, which seems to be the apparent
size of the process in virtual memory RAM, I would like to
kill it based on the amount of unshared RAM, which is ultimately
what we care about.
We added that in, but
On Tue, 9 Jan 2001, Joshua Chamas wrote:
Perrin Harkins wrote:
We added that in, but haven't contributed a patch back because our hack only
works on Linux. It's actually pretty simple, since the data is already
there on Linux and you don't need to do any special tricks
On Tue, 9 Jan 2001, Elman Vagif Abdullaev wrote:
Does anyone know if there is a module that enables dynamic cache
allocation for apache web server on the proxy?
"Dynamic cache allocation" could mean anything. Can you be more specific?
- Perrin
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
I have a machine w/ 512MB of ram.
unload the webserver, see that I have, say, 450MB free.
So I would like to tell apache that it is allowed to use at most 425MB.
I was thinking about that at some point too. The catch is, different
applications have
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
OK, so my next question about per-process size limits is this:
Is it a hard limit???
As in,
what if I alloc 10MB/per and every now then my one of my processes spikes
to a (not unreasonable) 11MB? Will it be nuked in mid process? Or just
On Tue, 9 Jan 2001, Rob Bloodgood wrote:
It's not a hard limit, and I actually only have it check on every other
request. We do use hard limits with BSD::Resource to set maximums on CPU
and RAM, in case something goes totally out of control. That's just a
safety though.
chokes JUST a