hi nigel...

using any kind of file locking process requires that i essentially have a
gatekeeper, allowing only a single process at a time to enter and access
the files...

i can easily set up a file read/write lock process where a client app
gets/locks a file, and then copies/moves the required files from the initial
dir to a tmp dir. after the move/copy, the lock is released, and the client
can go ahead and do whatever with the files in the tmp dir.. this process
allows multiple clients to operate in a pseudo-parallel manner...
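
roughly what i have in mind (a minimal sketch, assuming a posix flock-style
gatekeeper; the dir names and batch size are placeholders, not real paths):

import fcntl
import os
import shutil
import tempfile

INCOMING = "/path/to/incoming"        # hypothetical shared source dir
LOCKFILE = "/path/to/incoming.lock"   # the single gatekeeper lock

def claim_files(count=10):
    """hold the gatekeeper lock just long enough to move a batch out."""
    tmpdir = tempfile.mkdtemp(prefix="client-")
    with open(LOCKFILE, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)      # only one client inside
        try:
            for name in os.listdir(INCOMING)[:count]:
                shutil.move(os.path.join(INCOMING, name), tmpdir)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)  # release before processing
    return tmpdir  # caller processes/deletes the batch at its leisure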

i'm trying to figure out if there's a much better/faster approach
available.. which is where the academic/research question came from..

the issue i'm looking at is analogous to a FIFO, where i have lots of
files being shoved into a dir by different processes.. on the other end, i
want to allow multiple client processes to access unique groups of these
files as fast as possible.. access meaning fetch/gather/process/delete the
files. each file is only handled by a single client process.
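
one lock-free variant i'm wondering about is claim-by-rename, sketched
below.. this assumes os.rename is atomic when both dirs sit on the same
posix filesystem, and that both dirs already exist; the names are
placeholders:

import errno
import os

INCOMING = "/path/to/incoming"   # producers shove files in here
CLAIMED  = "/path/to/claimed"    # successful claims land here

def claim_one():
    """atomically claim a single file via rename; no gatekeeper needed."""
    for name in os.listdir(INCOMING):
        src = os.path.join(INCOMING, name)
        dst = os.path.join(CLAIMED, "%d-%s" % (os.getpid(), name))
        try:
            os.rename(src, dst)   # atomic: exactly one client wins the file
            return dst
        except OSError as e:
            # ENOENT means another client renamed it first; try the next one
            if e.errno != errno.ENOENT:
                raise
    return None                   # nothing left to claim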

thanks..
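
ps: to make sure i follow the fork-off-children idea from your earlier
reply, here's a minimal sketch (assumes posix os.fork; the grouping of
files and the per-file process() are hypothetical placeholders):

import os

def run_children(groups):
    """fork one child per group of files; the parent waits for all."""
    pids = []
    for group in groups:
        pid = os.fork()
        if pid == 0:          # child: handle only its own group, then exit
            for path in group:
                process(path)
            os._exit(0)
        pids.append(pid)
    for pid in pids:          # parent: wait, then do any post-processing
        os.waitpid(pid, 0)

def process(path):
    """placeholder for the real per-file work."""
    pass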



-----Original Message-----
From: python-list-bounces+bedouglas=earthlink....@python.org
[mailto:python-list-bounces+bedouglas=earthlink....@python.org] On Behalf
Of Nigel Rantor
Sent: Sunday, March 01, 2009 2:00 AM
To: koranthala
Cc: python-list@python.org
Subject: Re: file locking...


koranthala wrote:
> On Mar 1, 2:28 pm, Nigel Rantor <wig...@wiggly.org> wrote:
>> bruce wrote:
>>> Hi.
>>> Got a bit of a question/issue that I'm trying to resolve. I'm asking
>>> this of a few groups so bear with me.
>>> I'm considering a situation where I have multiple processes running,
>>> and each process is going to access a number of files in a dir. Each
>>> process accesses a unique group of files, and then writes the group
>>> of files to another dir. I can easily handle this by using a form of
>>> locking, where I have the processes lock/read a file and only access
>>> the group of files in the dir based on the open/free status of the
>>> lockfile.
>>> However, the issue with the approach is that it's somewhat
>>> synchronous. I'm looking for something that might be more
>>> asynchronous/parallel, in that I'd like to have multiple processes
>>> each access a unique group of files from the given dir as fast as
>>> possible.
>> I don't see how this is synchronous if you have a lock per file. Perhaps
>> you've missed something out of your description of your problem.
>>
>>> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
>>>  pointers to academic research, etc.. would be useful.
>> I'm not sure you need academic papers here.
>>
>> One trivial solution to this problem is to have a single process
>> determine the complete set of files that require processing then fork
>> off children, each with a different set of files to process.
>>
>> The parent then just waits for them to finish and does any
>> post-processing required.
>>
>> A more concrete problem statement may of course change the solution...
>>
>>    n
>
> Using twisted might also be helpful.
> Then you can avoid the problems associated with threading too.

No one mentioned threads.

I can't see how Twisted in this instance isn't like using a sledgehammer
to crack a nut.

   n
--
http://mail.python.org/mailman/listinfo/python-list
