Re: maintaining shared memory size (was: Re: swamped with connections?)

2005-08-24 Thread Perrin Harkins
On Wed, 2005-08-24 at 07:02 -0400, Sean Davis wrote:
 As an aside, are there rules of thumb about what cache works best in various
 situations?

Cache::FastMmap and BerkeleyDB are the fastest by far, but they are
local to one machine.  Cache::Memcached or a simple key/value table in a
MySQL server are second fastest, and can be shared between machines.
After that, everything else is a lot slower.  Some stats are available
here:
http://cpan.robm.fastmail.fm/cache_perf.html

 Are RDBMS's, if accessible, generally a good solution?

For caching?  MySQL is surprisingly fast when handling primary-key
lookups against a simple table.  It's faster than Cache::Cache and
most of the other caching modules.
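The key/value-table-in-MySQL pattern mentioned above might look something like this sketch. The table layout and helper names are made up for illustration; an in-memory SQLite handle stands in for the MySQL connection so the snippet runs anywhere (with MySQL you would connect via DBD::mysql and use REPLACE INTO or ON DUPLICATE KEY UPDATE instead of SQLite's INSERT OR REPLACE):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In production this would be DBI->connect('DBI:mysql:...', $user, $pass);
# an in-memory SQLite database stands in here so the sketch is runnable.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });

# A simple key/value table; primary-key lookups against a table like
# this are the case MySQL handles so quickly.
$dbh->do('CREATE TABLE cache (ck VARCHAR(255) PRIMARY KEY, cv BLOB)');

sub cache_set {
    my ($key, $value) = @_;
    # INSERT OR REPLACE is SQLite syntax; MySQL would use REPLACE INTO.
    $dbh->do('INSERT OR REPLACE INTO cache (ck, cv) VALUES (?, ?)',
             undef, $key, $value);
}

sub cache_get {
    my ($key) = @_;
    my ($value) = $dbh->selectrow_array(
        'SELECT cv FROM cache WHERE ck = ?', undef, $key);
    return $value;  # undef on a cache miss
}

cache_set('session:42', 'serialized session data');
print cache_get('session:42'), "\n";
```

Because every web server process can reach the same MySQL server, this cache is shared across machines, unlike Cache::FastMmap or BerkeleyDB.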

- Perrin



Re: maintaining shared memory size (was: Re: swamped with connections?)

2005-08-23 Thread Badai Aqrandista

On Tue, 2005-08-23 at 17:23 +1000, Badai Aqrandista wrote:
 How do I maintain the size of the shared memory between apache children?
 What causes a memory page to be copied (not shared) from perl's point of
 view?

Anything that writes to memory -- modifying any variable (even just
reading one in a different context) or compiling some code are the most
common things.  There's a bit more here:
http://modperlbook.com/html/ch10_01.html


Oh my, I should've kept memory consumption in mind when designing the code 
in the first place.


Anyway, to fix this, I'm trying to make my own shared memory with 
Apache::SharedMem. But it seems that shared memory is just a memory area 
that any process can read from or write to, not a memory that can become 
part of any process's memory space.


I'd like to put the read only data in a memory area that can become part of 
any process's memory space, like threads sharing memory space. Is there any 
way I can do that? How about mmap? I have never looked into that.


Does this sound like fixing the wrong problem?

Thank you...

---
Badai Aqrandista
Cheepy (?)





Re: maintaining shared memory size (was: Re: swamped with connections?)

2005-08-23 Thread Perrin Harkins
On Wed, 2005-08-24 at 10:31 +1000, Badai Aqrandista wrote:
 Anyway, to fix this, I'm trying to make my own shared memory with 
 Apache::SharedMem.

Don't use that module.  It's very inefficient.

 But it seems that shared memory is just a memory area 
 that any process can read from or write to, not a memory that can become 
 part of any process's memory space.
 
 I'd like to put the read only data in a memory area that can become part of 
 any process's memory space, like threads sharing memory space. Is there any 
 way I can do that?

You can share read-only data by loading it into normal variables during
startup.  You can share read/write data by keeping it in a database
(including things like BerkeleyDB or Cache::FastMmap) and only reading
the small parts of it that you need in each process.
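The read-only case above amounts to loading the data into ordinary package variables before Apache forks (in mod_perl, typically from startup.pl), so the children inherit it in copy-on-write pages. A minimal runnable sketch, with made-up data and names:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# In mod_perl this load would happen in startup.pl, before the fork:
# data created here lives in pages that all children share, for as
# long as nothing dirties them.
our %PRODUCT_NAMES = (
    1 => 'widget',
    2 => 'sprocket',
);

# A child process later just reads the package variable.  Note that
# even reads can dirty a page in perl (reference counts are updated
# in place), so sharing is best-effort rather than guaranteed.
sub product_name {
    my ($id) = @_;
    return $PRODUCT_NAMES{$id};
}

print product_name(2), "\n";
```

The important property is that the data is loaded once, before the fork, and never written to afterward; writing to it from a child would copy the affected pages into that child's private memory.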

There is no way for perl to use data without allocating memory in the
process to hold that data while you use it.  In other words, shared
memory is not useful for reducing the size of existing processes unless
those processes are currently each holding a lot of data that they never
use.

 Does this sound like fixing the wrong problem?

Yes.  Put a reverse proxy in front of your server, tune MaxClients so
you won't go into swap, and then benchmark to see how much load you can
handle.  Then think about tuning.
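The setup described above might look roughly like this (a sketch only; the port, path, and MaxClients value are illustrative, not a recommendation):

```apache
# Frontend: a lightweight Apache with mod_proxy on port 80.  It serves
# static files itself and forwards dynamic requests to the backend, so
# heavy mod_perl children are not tied up feeding slow clients.
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app

# Backend: the mod_perl Apache listening on 127.0.0.1:8080.  Cap the
# number of children so total memory stays below physical RAM, roughly
# (free RAM) / (unshared memory per child).
MaxClients 20
```

With MaxClients capped, the server queues excess requests instead of swapping, which is usually the difference between degrading gracefully and falling over.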

- Perrin