Melvin Smith <[EMAIL PROTECTED]> wrote:

> I thought we were discussing correct behavior of a shared data structure,
> not general cases. Or maybe this is the general case and I should
> go read more backlog? :)

Basically we have three kinds of locking:
- HLL user level locking [1]
- user level locking primitives [2]
- vtable pmc locking to protect internals

Locking at every stage and for every PMC will be slow and can deadlock
too. Very coarse-grained locking (like Python's interpreter_lock) gives
no advantage on MP systems - only one interpreter runs at a time.
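To make the MP point concrete, here is a minimal Python sketch of such a coarse-grained interpreter lock (the lock, the thread setup, and the names are hypothetical, with Python threads standing in for interpreters): each "interpreter" holds one global mutex for its whole run, so even with multiple processors only one ever makes progress at a time.

```python
import threading

# Hypothetical coarse-grained "interpreter lock": every interpreter
# thread holds one global mutex for its entire run.
interpreter_lock = threading.Lock()
results = []

def run_interpreter(name, steps):
    with interpreter_lock:           # the whole run is serialized
        for i in range(steps):
            results.append((name, i))

threads = [threading.Thread(target=run_interpreter, args=(n, 3))
           for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because the lock covers the whole run, each thread's steps appear
# as one uninterrupted block in `results` - no interleaving, hence
# no parallelism on an MP machine.
print(results)
```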

We can't guarantee user data integrity at the lowest level: the data
logic isn't really visible there. But we should be able to integrate HLL
locking with our internal needs, so that the former doesn't cause
deadlocks[3] just because we have to lock internally too, and so that we
can omit internal locking when the HLL locking code already provides
that safety for a specific PMC.
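One way to let HLL locking and internal vtable locking coexist without the deadlock in [3] is a reentrant per-PMC lock: an internal lock taken while the user already holds the lock nests instead of blocking. A hypothetical Python sketch (the class and method names are invented for illustration, not Parrot API):

```python
import threading

class SharedPMC:
    """Hypothetical PMC: vtable operations lock internally, but the
    lock is reentrant, so a user-held HLL lock nests safely."""

    def __init__(self):
        self._lock = threading.RLock()   # reentrant per-PMC lock
        self.value = 0

    # [2]-style user-level primitives exposed to HLL code
    def acquire(self):
        self._lock.acquire()

    def release(self):
        self._lock.release()

    # vtable operation: locks internally to protect the PMC's guts;
    # with a plain mutex this would deadlock under a user-held lock
    def increment(self):
        with self._lock:
            self.value += 1

pmc = SharedPMC()
pmc.acquire()        # HLL-level lock...
pmc.increment()      # ...internal vtable lock nests, no deadlock
pmc.release()
print(pmc.value)     # 1
```

A reentrant lock is only one option; a per-PMC "already locked by me" flag that lets the vtable skip its internal lock entirely would avoid even the recursive acquire.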

The final strategy of when to lock what depends on the system the code
is running on. Python's model is fine for single processors; fine-grained
PMC locking gives more of a boost on multi-processor machines.

All generalization is evil and 47.4 +- 1.1% of all
statistics^Wbenchmarks are wrong;)

leo

[1] BLOCK scoped, e.g. synchronized {... } or { lock $x; ... }
    These can be rwlocks or mutex-typed locks
[2] lock.acquire, lock.release

[3] not user-caused deadlocks, but deadlocks arising from the mix of
    e.g. one user lock and internal locking.
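Footnotes [1] and [2] describe two user-level shapes of the same thing; a minimal Python sketch of both (Python standing in for the HLL here):

```python
import threading

lock = threading.Lock()
log = []

# [1]-style BLOCK-scoped locking, the analogue of synchronized { ... }
# or { lock $x; ... }: the lock is released automatically when the
# block exits, even if an exception is raised.
with lock:
    log.append("block-scoped section")

# [2]-style explicit primitives: acquire/release paired by hand,
# with try/finally doing the work the block scope did above.
lock.acquire()
try:
    log.append("explicit section")
finally:
    lock.release()

print(log)
```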

> -Melvin
