On 2012-07-20 13:05, punit jain wrote:

I have multiple processes modifying a hash of hashes of arrays. For
multiprocessing I am using Parallel::ForkManager.

The requirement: I set the maximum number of processes to, say, 5. The script
forks workers, at most 5 run in parallel, and each one performs some action on
the complex data structure; for example, after completing its task successfully
a process moves the user's entry from the 'pending' hash key to the 'completed'
hash key.

This task needs to run on around 10K users, and I must not run it for more
than 1 hour, after which I need to store the state to disk so that I can come
back later and resume from the point where I left off.

I am planning to use the Storable module to store the state and retrieve it later.

My question: do I need to implement locks on the write operations, since
multiple processes will be writing to the same complex data structure? Or will
just calling store() in each process take care of simultaneous writes?

Any pointers to some code will be helpful.

Always obey the 'Single Writer Principle' (see Google).
Reading from shared resources is fine; writing to a shared resource is forbidden.

The definition of 'shared resource' is whatever you find practical.
For example, I often have multiple processes inserting into the same MySQL table, but that table is partitioned/sharded, and only one process writes to a given partition/shard at any point in time.
This is all about not needing any locks.
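
The same idea answers the Storable question directly: plain store() from several
processes to one file gives no protection at all, the writes simply clobber each
other. But if each worker owns its own checkpoint file, every file has exactly
one writer and no locks are needed; the parent (or the next run) merges the
pieces with retrieve(). A rough sketch, not your actual code, with made-up file
names and a stand-in for the real per-user work:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Parallel::ForkManager;
  use Storable qw(store retrieve);

  my @users  = map { "user$_" } 1 .. 10_000;    # placeholder user list
  my $shards = 5;

  # Deal the users round-robin into one bucket per worker, so each
  # bucket (and its checkpoint file) has exactly one writer.
  my @bucket;
  push @{ $bucket[ $_ % $shards ] }, $users[$_] for 0 .. $#users;

  my $pm = Parallel::ForkManager->new($shards);
  for my $n (0 .. $shards - 1) {
      $pm->start and next;                      # parent keeps forking

      # --- child: owns shard $n and its file exclusively ---
      my $file  = "state.$n.stor";
      my $state = -e $file ? retrieve($file)
                           : { pending   => { map { $_ => [] } @{ $bucket[$n] } },
                               completed => {} };
      for my $user (keys %{ $state->{pending} }) {
          # stand-in for the real task
          $state->{completed}{$user} = delete $state->{pending}{$user};
      }
      store $state, $file;                      # no other process writes here
      $pm->finish(0);
  }
  $pm->wait_all_children;

  # Merge the shards back into one hash if the parent needs the full picture.
  my %completed;
  for my $n (0 .. $shards - 1) {
      my $part = retrieve("state.$n.stor");
      @completed{ keys %{ $part->{completed} } } = values %{ $part->{completed} };
  }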

Design as if locks don't exist!
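
With Parallel::ForkManager specifically, the easiest single writer is the parent
itself. The forked children cannot modify your hash anyway, they only work on
copy-on-write copies of it, so have each child hand its result back through
finish() and let the parent's run_on_finish callback do every write: move the
user from 'pending' to 'completed', and call store() once (or periodically) from
the parent only. A minimal sketch along those lines; the hash layout, the
checkpoint file name and do_task() are invented for illustration:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use Parallel::ForkManager;
  use Storable qw(store retrieve);

  my $state_file = 'state.stor';
  # Resume from an earlier checkpoint if there is one, otherwise start fresh.
  my $state = -e $state_file
      ? retrieve($state_file)
      : { pending => { map { ("user$_" => []) } 1 .. 10_000 }, completed => {} };

  my $pm = Parallel::ForkManager->new(5);       # at most 5 children at a time

  # Children report back through finish(); this callback runs in the parent,
  # so the parent is the only process that ever writes to %$state.
  $pm->run_on_finish(sub {
      my ($pid, $exit_code, $user, $signal, $core, $result) = @_;
      return unless $exit_code == 0 && $result; # failed: leave the user pending
      $state->{completed}{$user} = delete $state->{pending}{$user};
      push @{ $state->{completed}{$user} }, @{ $result->{log} || [] };
  });

  my $deadline = time + 3600;                   # stop starting work after one hour
  for my $user (keys %{ $state->{pending} }) {
      last if time > $deadline;
      $pm->start($user) and next;               # parent continues the loop

      # --- child: do the work, never touch the shared hash ---
      my @log = do_task($user);                 # do_task() is a placeholder
      $pm->finish(0, { log => \@log });         # hand the result to the parent
  }
  $pm->wait_all_children;

  store $state, $state_file;                    # one writer, one store()

  sub do_task { my ($user) = @_; return "processed $user at " . localtime }

The data reference passed to finish() is what arrives as the last argument of
the run_on_finish callback, so the children never need to know about the
checkpoint file or the shared hash at all.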

--
Ruud


