On Sep 13, 2007, at 10:12 AM, Martin v. Löwis wrote:

>> What do you think?
>
> I think what you are describing is the situation of today,
> except in a less-performant way. The kernel *already*
> implements such a "synchronization server", except that
> all CPUs can act as such. You write
>
> "Since we are guaranteeing that synchronized code is running on a
> single core, it is the equivalent of a lock at the cost of a context
> switch."
>
> This is precisely what a lock costs today: a context switch.
Really? Wouldn't we save some memory-allocation overhead (since in my
design the "lock" is really just a simple kernel instruction rather than
a full-blown object), thereby lowering lock overhead and allowing us to
use finer-grained "locks"? Since we're using an asynch message queue for
the synch server, it sounds like a standard lock-free algorithm.

> Since the Python interpreter is synchronized all of the time, it
> would completely run on the synchronization server all of the
> time. As you identify, that single CPU might get overloaded, so
> your scheme would give no benefits (since Python code could never
> run in parallel),

I think I neglected to mention that the locking would still need to be
more fine-grained - perhaps only do the context switch around refcounts
(and the other places where the GIL is critical). If we can do this in a
way that allows simple list comprehensions to run in parallel, that
would be really helpful (like a truly parallel map function).

> and only disadvantages (since multiple Python
> interpreters today can run on multiple CPUs, but could not
> anymore under your scheme).

Well, you could still run Python code in parallel if you used multiple
processes (each process having its own "synchronization server"). Is
that what you meant?

On Sep 13, 2007, at 12:38 PM, Justin Tulloss wrote:

>> What do you think?
>
> I'm going to have to agree with Martin here, although I'm not sure
> I understand what you're saying entirely. Perhaps if you explained
> where the benefits of this approach come from, it would clear up
> what you're thinking.

Well, my interpretation of the current problem is that removing the GIL
has not been productive because of lock contention on multi-core
machines. Naturally, we need to make the locking more fine-grained to
resolve this.
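For what it's worth, here is a toy Python sketch of the shape I have in
mind, with a dedicated consumer thread standing in for the dedicated
core. The class and method names are all made up for illustration, and
queue.Queue uses a lock internally, so this only shows the structure of
the design, not actual lock-freedom:

```python
# Toy sketch of the "synchronization server" idea: all refcount
# mutations are posted to a single consumer thread, so the mutating
# threads never take a per-object lock themselves. The consumer
# thread stands in for the proposed dedicated core, and queue.Queue
# stands in for a kernel-level message queue. Hypothetical names.
import queue
import threading

class SyncServer:
    def __init__(self):
        self.refcounts = {}          # object id -> refcount
        self.ops = queue.Queue()     # the "asynch message queue"
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Single consumer: every update happens on this one thread,
        # which is what serializes the refcount traffic.
        while True:
            op, obj_id = self.ops.get()
            delta = 1 if op == "incref" else -1
            self.refcounts[obj_id] = self.refcounts.get(obj_id, 0) + delta
            self.ops.task_done()

    def incref(self, obj_id):
        self.ops.put(("incref", obj_id))   # returns immediately

    def decref(self, obj_id):
        self.ops.put(("decref", obj_id))

server = SyncServer()
for _ in range(1000):
    server.incref(42)
server.decref(42)
server.ops.join()                # wait until the queue is drained
print(server.refcounts[42])      # -> 999
```

The point of the sketch is that incref/decref become fire-and-forget
messages, so the cost to the caller is one enqueue rather than an
acquire/release pair on a lock object.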
Hopefully we can do so in a way that does not increase the lock overhead
(hence my suggestion of a lock-free approach using an asynch queue and a
dedicated core as server). If we can somehow guarantee that all GC
operations (which are why the GIL is needed in the first place) run on a
single core, we get locking for free without actually having to have
threads spinning.

regards,
Prateek

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com