On Jan 20, 2:39 pm, Paul Rubin <http://phr...@nospam.invalid> wrote:
> Rhamphoryncus <rha...@gmail.com> writes:
> > "lock free" is largely meaningless.  What it really means is "we use
> > small hardware locks rather than big software locks, thereby reducing
> > (but not eliminating!) the contention".
>
> At least in the case of Haskell's "software transactional memory",
> reads are genuinely lock free in the normal uncontended case.  You use
> LOCK XCHG to increment a counter in the object when you want to
> update, and increment it again when you are done updating.  Since the
> counter starts at 0, any time it has an odd value, an update is in
> process and any other thread can notice that the object is locked and
> try again later without asserting any locks itself.  If the counter
> has an even value, it is unlocked, but there is a chance that a writer
> thread can lock and update the object while a reader thread has a
> lockless access in progress.  The reader detects this has occurred
> when it finishes its transaction, reads the counter a second time, and
> notices that the counter has been incremented (maybe more than once)
> since the transaction started; it then rolls back its operation and
> tries again.  I'm not explaining that well, so you can read about it in
> SPJ's paper or Keir Fraser's.
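
(The counter trick described above is essentially a seqlock.  Here is a
rough Python sketch of the control flow, purely for illustration: Python
has no LOCK XCHG, so the threading.Lock below only stands in for the
atomic increment, and the class name is my own invention.)

import threading

class VersionedBox:
    # version starts at 0; odd means an update is in progress,
    # even means the object is quiescent.
    def __init__(self, value):
        self._version = 0
        self._value = value
        self._write_lock = threading.Lock()  # stand-in for LOCK XCHG

    def write(self, new_value):
        with self._write_lock:
            self._version += 1        # now odd: readers will retry
            self._value = new_value
            self._version += 1        # even again: update committed

    def read(self):
        while True:
            before = self._version
            if before % 2:            # writer in progress; try again
                continue
            value = self._value       # optimistic, lockless read
            if self._version == before:
                return value          # no writer raced with us
            # counter moved: a write overlapped our read, so retry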

a) The contended case is the issue, not the uncontended case.  An
uncontended lock is just constant overhead, not a barrier to
scalability.
b) Locks on Linux (since the switch to futexes) are pretty close to
free when uncontended.

Transactions are really for different properties:
a) read patterns can be uncontended (fully scalable)
b) a preempted writer does not block other writers, guaranteeing
forward progress
c) ad-hoc combination of several objects into a single atomic update
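
(To make (c) concrete, here is a toy sketch.  Python has no STM, so the
atomic() context manager below is just a stub built on one big lock so
the example runs; a real transactional implementation would log the
reads and writes and commit or retry instead.)

import threading
from contextlib import contextmanager

_stm_lock = threading.Lock()   # stub only; a real STM has no global lock

@contextmanager
def atomic():
    # Stand-in for a transaction: serializes everything rather than
    # buffering and committing, but gives the same atomicity to callers.
    with _stm_lock:
        yield

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount):
    # Two independent objects combined into one atomic update, with no
    # per-object lock ordering arranged by the caller.
    with atomic():
        src.balance -= amount
        dst.balance += amount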


> > The second issue is the objects themselves, like a list which is
> > mutable.  If you're using it in a single thread or writing from
> > multiple threads, this is a non-trivial constant cost.  If your object
> > is not modified after creation and is read from many threads a lock
> > would be a point of contention, preventing you from scaling freely.
> > The dicts used by classes and globals are an important example of this,
> > and a successful implementation needs something non-contending.  I
> > assume Jython and IronPython do this.
>
> I'm pretty sure Jython makes no attempt at all to mess with ref
> counts--it just relies on the underlying Java gc.  I have no idea
> about IronPython.

The second issue has *nothing* to do with refcounts.  It is the
objects themselves.  A classic example is trying to do "i += 1", which
breaks down into "i = i + 1", i.e. a separate load, add, and store,
which isn't an atomic operation.
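
(A minimal demonstration; whether you actually observe lost updates
depends on the interpreter version and on timing, but nothing in the
language guarantees the read-modify-write is atomic:)

import threading

class Counter:
    def __init__(self):
        self.value = 0

def worker(counter, n):
    for _ in range(n):
        counter.value += 1        # load, add, store: three separate steps

c = Counter()
threads = [threading.Thread(target=worker, args=(c, 100000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value)                    # "should" be 400000; may come up short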

Java's ConcurrentHashMap gets this right.
