On Aug 4, 9:30 am, Nikolaus Rath <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I need to synchronize access to a couple hundred thousand
> files[1]. It seems to me that creating one lock object for each of the
> files is a waste of resources, but I cannot use a global lock for all
> of them either (since the locked operations go over the network, this
> would make the whole application essentially single-threaded even
> though most operations act on different files).
>
> My idea is therefore to create and destroy per-file locks "on-demand"
> and to protect the creation and destruction by a global lock
> (self.global_lock). For that, I add a "usage counter"
> (wlock.user_count) to each lock, and destroy the lock when it reaches
> zero.
[snip]
> My questions:
>
>  - Does that look like a proper solution, or does anyone have a better
>    one?
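
For what it's worth, a rough sketch of the refcounted per-key scheme
you describe might look like the following (the names and bookkeeping
are just illustrative; I'm reusing the same lock_s3key/unlock_s3key
interface as the alternative further down so the two are comparable):

import threading

creation_lock = threading.Lock()  # guards creation/destruction of per-key locks
key_locks = {}                    # maps s3key -> [per-key lock, user count]

def lock_s3key(s3key):
    creation_lock.acquire()
    entry = key_locks.setdefault(s3key, [threading.Lock(), 0])
    entry[1] += 1                 # register interest before blocking
    creation_lock.release()
    entry[0].acquire()            # blocks only against users of the same key

def unlock_s3key(s3key):
    creation_lock.acquire()
    entry = key_locks[s3key]
    entry[0].release()
    entry[1] -= 1
    if entry[1] == 0:             # last user destroys the per-key lock
        del key_locks[s3key]
    creation_lock.release()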


But you don't need the per-file locks at all if you use a global lock
like this.  Here's a way to do it using a threading.Condition object.
I suspect it might not perform so well if there is a lot of contention
for certain keys, but it doesn't sound like that's the case for you.
Performance and robustness improvements are left as an exercise.
(Note: I'm not sure where self comes from in your examples, so I left
it out of mine.)


import threading

# One Condition guards the set of keys that are currently locked.
global_lock = threading.Condition()
locked_keys = set()

def lock_s3key(s3key):
    global_lock.acquire()
    # Wait until nobody else holds this key, then claim it.
    while s3key in locked_keys:
        global_lock.wait()
    locked_keys.add(s3key)
    global_lock.release()

def unlock_s3key(s3key):
    global_lock.acquire()
    locked_keys.remove(s3key)
    # Wake every waiter; each re-checks whether its key is now free.
    global_lock.notifyAll()
    global_lock.release()
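
A caller would then bracket each network operation with these two
calls, releasing in a finally clause so a failed request doesn't leave
the key locked forever.  upload_to_s3 below is just a stand-in for
whatever per-file operation you actually do:

def upload_to_s3(s3key):
    pass  # stand-in for the real network operation

def process(s3key):
    lock_s3key(s3key)
    try:
        upload_to_s3(s3key)
    finally:
        unlock_s3key(s3key)  # release even if the operation raises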



Carl Banks