Sebastian Noack added the comment:

Exactly, with my implementation "the lock acquired first will be granted 
first". There is no way that either shared or exclusive locks can starve, so 
it should satisfy all use cases. And since you can only share simple data 
structures like integers across processes, this also seems to be the only 
policy (short of ignoring the acquisition order altogether) that can be 
implemented for multiprocessing.
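
Just to make the policy concrete (this is only a rough ticket-based sketch, 
not the implementation I attached; the class and attribute names are made up 
for illustration), the bookkeeping needs nothing more than three shared 
integers and a condition:

import multiprocessing

class FairRWLock(object):
    # Rough sketch of the first-come-first-served policy only -- *not* the
    # patch attached to this issue; the class and attribute names are made
    # up for illustration.

    def __init__(self):
        self._cond = multiprocessing.Condition()
        # All shared state is plain integers, protected by the condition's lock.
        self._tickets = multiprocessing.Value('i', 0, lock=False)  # next ticket
        self._serving = multiprocessing.Value('i', 0, lock=False)  # ticket served
        self._readers = multiprocessing.Value('i', 0, lock=False)  # active readers

    def acquire_shared(self):
        with self._cond:
            ticket = self._tickets.value
            self._tickets.value += 1
            while self._serving.value != ticket:   # wait for our turn, FIFO
                self._cond.wait()
            self._readers.value += 1
            self._serving.value += 1               # next waiter may proceed at once
            self._cond.notify_all()

    def release_shared(self):
        with self._cond:
            self._readers.value -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            ticket = self._tickets.value
            self._tickets.value += 1
            # wait for our turn *and* for earlier shared holders to drain
            while self._serving.value != ticket or self._readers.value:
                self._cond.wait()

    def release_exclusive(self):
        with self._cond:
            self._serving.value += 1               # grant the next ticket
            self._cond.notify_all()

Shared acquirers pass the serving counter on immediately, so consecutive 
readers overlap, while an exclusive acquirer keeps it until release; that is 
what enforces the first-come-first-served order.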

I have also looked at the seqlock algorithm, which seems great for use cases 
where the exclusive lock is acquired rather rarely and where the "reader" code 
is strictly read-only and can therefore be repeated. In any other case a 
seqlock would break your code. However, the algorithm is so simple that it 
can't usefully be exposed as a lock-like object anyway. You could wrap it in a 
context manager, but that would hide the fact that the "reader" code may run 
more than once. So if a seqlock is what your specific use case calls for, you 
can just spell the algorithm out, roughly like this:

import multiprocessing

lock = multiprocessing.Lock()          # serializes writers
count = multiprocessing.Value('i', 0)  # sequence counter, odd while a write is in progress

def do_read():
    while True:
        start = count.value
        if start % 2:                  # a write is in progress, retry
            continue
        data = ...                     # read the shared data
        if count.value != start:       # a write completed meanwhile, retry
            continue
        return data

def do_write(data):
    with lock:
        count.value += 1               # becomes odd: write in progress
        # write data
        count.value += 1               # becomes even again: write finished

I have also experimented with implementing a shared/exclusive lock on top of a 
pipe and UNIX file locks (https://gist.github.com/3818148). However, it works 
only on Unix and only with processes (not threads). It also turned out that 
UNIX file locks don't guarantee any acquisition order, so exclusive locks can 
starve, which renders that approach useless for most use cases.
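
For reference, the kernel primitive I mean there is flock(2); roughly (this is 
not the code from the gist, just an illustration of the shared/exclusive 
flavors it builds on, with a made-up lock file path):

import fcntl
import os

fd = os.open('/tmp/rwlock', os.O_CREAT | os.O_RDWR, 0o600)  # any lock file

fcntl.flock(fd, fcntl.LOCK_SH)   # shared ("reader") lock
# ... read ...
fcntl.flock(fd, fcntl.LOCK_UN)

fcntl.flock(fd, fcntl.LOCK_EX)   # exclusive ("writer") lock
# ... write ...
fcntl.flock(fd, fcntl.LOCK_UN)

The kernel makes no fairness guarantee about which waiter is granted next, 
which is exactly where this approach falls short.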

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue8800>
_______________________________________