On Wed, Sep 17, 2014 at 1:18 PM, Mark Hedges <mark.hed...@ticketmaster.com>
wrote:

> For example, you could use a tied DBM/MLDBM hash, DBD::SQLite
> or another file-based database with access locking for your
> cache, and save it in a shared memory filesystem like /dev/shm.
>
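A minimal sketch of that kind of setup, assuming MLDBM over DB_File with
Storable serialization and a cache file under /dev/shm (the paths, key, and
locking details are placeholders, not a drop-in implementation):

  use strict;
  use warnings;
  use Fcntl qw(:flock O_CREAT O_RDWR);
  use MLDBM qw(DB_File Storable);   # tie through DB_File, serialize values with Storable

  # Hypothetical paths -- /dev/shm is a tmpfs on most Linux systems.
  my $db_file   = '/dev/shm/myapp-cache.db';
  my $lock_file = '/dev/shm/myapp-cache.lock';

  # Exclusive lock around writes so concurrent children don't corrupt the DBM.
  open my $lock, '>', $lock_file or die "can't open $lock_file: $!";
  flock $lock, LOCK_EX            or die "can't lock $lock_file: $!";

  tie my %cache, 'MLDBM', $db_file, O_CREAT|O_RDWR, 0640
      or die "can't tie $db_file: $!";

  $cache{user_42} = { name => 'Alice', roles => ['admin'] };  # stored via Storable

  untie %cache;
  flock $lock, LOCK_UN;
  close $lock;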

I would suggest that too, if it fits the use case. BerkeleyDB,
Cache::FastMmap, Redis, or Memcached are all common solutions. They don't
work for everything, though. Declaring a global variable in the parent
process and then loading it with data in the child processes won't share
anything: each child ends up loading the entire data set into its own
unshared memory. If you want to share that data, you have to load it before
forking. Even then, once a child process modifies it, the changes are not
shared back to the parent or to the other children.
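A minimal sketch of the preload-before-fork pattern (load_table and the key
names are hypothetical stand-ins for however you build the data):

  use strict;
  use warnings;

  # Loaded once in the parent, before forking: the pages holding %big_table
  # start out shared (copy-on-write) with every child.
  our %big_table = load_table();

  for my $n (1 .. 3) {
      my $pid = fork();
      die "fork failed: $!" unless defined $pid;
      if ($pid == 0) {
          # Child: reads mostly stay shared (Perl's refcount updates can
          # still copy some pages), but a write copies the affected pages
          # into this child only -- the parent and siblings never see it.
          my $row = $big_table{key1};
          $big_table{key1} = 'changed in this child only';
          exit 0;
      }
  }
  wait() for 1 .. 3;

  sub load_table {
      # stand-in for reading a large dataset from disk or a database
      return map { ("key$_" => "value$_") } 1 .. 100_000;
  }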

As a side note, it's quite difficult to know how much memory is really
shared. If you want to know, look at Linux::Smaps, not top or ps.
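For example, something along these lines (Linux only, since it reads
/proc/<pid>/smaps; values are reported in kB):

  use strict;
  use warnings;
  use Linux::Smaps;

  # Inspect the current process, e.g. from inside a mod_perl child.
  my $map = Linux::Smaps->new($$);

  printf "size:    %d kB\n", $map->size;
  printf "rss:     %d kB\n", $map->rss;
  printf "shared:  %d kB\n", $map->shared_clean  + $map->shared_dirty;
  printf "private: %d kB\n", $map->private_clean + $map->private_dirty;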

I don't see any way that loading data in the child process could increase
the size of the parent process, but memory sharing behavior is complicated
and I could be missing something.

- Perrin
