bruce wrote:
Hi.

Got a bit of a question/issue that I'm trying to resolve. I'm asking this of
a few groups so bear with me.

I'm considering a situation where I have multiple processes running, and
each process is going to access a number of files in a dir. Each process
accesses a unique group of files, and then writes the group of files to
another dir. I can handle this with a form of locking, where each process
checks a lockfile and only accesses its group of files in the dir when the
lockfile shows the dir is free.
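A minimal sketch of the lockfile scheme described above, assuming a POSIX-style filesystem where `os.open` with `O_CREAT | O_EXCL` atomically creates the file or fails if it already exists (the `work.lock` name and helper names here are illustrative, not from the original post):

```python
import os

LOCKFILE = "work.lock"  # hypothetical lock file name

def try_acquire(lockfile=LOCKFILE):
    """Atomically create the lock file; return False if another
    process already holds it (the file already exists)."""
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release(lockfile=LOCKFILE):
    """Release the lock by removing the lock file."""
    os.remove(lockfile)
```

A process would loop on `try_acquire`, read its group of files while holding the lock, then call `release` — which is exactly the serialization the next paragraph complains about.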

However, the issue with this approach is that it serializes access. I'm
looking for something more asynchronous/parallel, in that I'd like to have
multiple processes each grab a unique group of files from the given dir as
fast as possible.

So: any thoughts/pointers/comments would be greatly appreciated. Pointers
to academic research, etc., would also be useful.

You say "each process accesses a unique group of files". Does this mean that no two processes access the same file?

If yes, then why do you need locking?

If no, then could you lock, move the files into a work folder, and then unlock? (Remember that moving a file on the same volume (disk) should be a quick renaming.) There could be one work folder per process, although that isn't necessary because each process would know (or be told) which of the files in the work folder belonged to it.
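The lock/move/unlock idea above can be sketched roughly as follows, assuming all directories live on the same volume so `os.rename` is an atomic, cheap operation (the function and directory names are illustrative; the brief lock around the whole batch is elided here, since a same-volume rename is itself atomic per file):

```python
import os

def claim_files(src_dir, work_dir, filenames):
    """Move this process's group of files into its own work folder.

    On the same filesystem, os.rename is an atomic rename rather
    than a copy, so the files are claimed almost instantly and any
    lock only needs to be held for this short loop.
    """
    os.makedirs(work_dir, exist_ok=True)
    claimed = []
    for name in filenames:
        src = os.path.join(src_dir, name)
        dst = os.path.join(work_dir, name)
        try:
            os.rename(src, dst)  # atomic on the same volume
            claimed.append(dst)
        except FileNotFoundError:
            pass  # another process already claimed this file
    return claimed
```

Because a rename either succeeds or raises, two processes racing for the same file cannot both claim it — one gets `FileNotFoundError` and moves on — which is what makes the single shared work-folder variant safe as well.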
--
http://mail.python.org/mailman/listinfo/python-list