On Sun, Aug 21, 2011 at 09:39:14PM +0200, Cherry G. Mathew wrote:
>     JM> An alternative would be to have per-CPU xpq_queue[] also. This
>     JM> is not completely stupid, xpq_queue is meant as a way to put
>     JM> multiple MMU operations in a queue asynchronously before issuing
>     JM> only one hypercall to handle them all.
> 
> This is slightly more complicated than it appears. Some of the "ops" in
> a per-cpu queue may have ordering dependencies with other cpu queues,
> and I think this would be hard to express trivially. (an example would
> be a pte update on one queue, and a read of the same pte on another
> queue - these cases are quite analogous (although completely unrelated)

Reads don't go through the xpq queue, do they?
I think this is similar to a TLB flush but the other way round;
I guess we could use an IPI for this too.
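
For context, here's a rough sketch of what the queue amounts to: an array
of mmu_update_t entries flushed with one HYPERVISOR_mmu_update() call.
This is a simplified illustration, not the actual x86/xen code; the array
size and helper names are just placeholders.

#include <xen/xen.h>		/* mmu_update_t, DOMID_SELF (assumed) */

#define XPQ_SIZE 2048

static mmu_update_t xpq_queue[XPQ_SIZE];
static int xpq_idx;

/* Issue a single hypercall covering everything queued so far. */
static void
xpq_flush_queue(void)
{
	int ok;

	if (xpq_idx == 0)
		return;
	if (HYPERVISOR_mmu_update(xpq_queue, xpq_idx, &ok, DOMID_SELF) < 0)
		panic("HYPERVISOR_mmu_update failed");
	xpq_idx = 0;
}

/* Queue one pte update; the hypercall itself is deferred. */
static void
xpq_queue_pte_update(paddr_t ptr, pt_entry_t val)
{
	if (xpq_idx == XPQ_SIZE)
		xpq_flush_queue();
	xpq_queue[xpq_idx].ptr = ptr;	/* machine address of the pte */
	xpq_queue[xpq_idx].val = val;	/* new pte value */
	xpq_idx++;
}

With per-CPU queues, a read of a pte on one CPU can't see updates still
sitting unflushed in another CPU's queue, which is the ordering problem
mentioned above.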

> 
> I'm thinking that it might be easier and more justifiable to nuke the
> current queue scheme and implement shadow page tables, which would fit
> more naturally and efficiently with CAS pte updates, etc.

I'm not sure this would completely fix the issue: with shadow page tables
you can't use a CAS to guarantee atomicity with respect to the hardware TLB,
because what you're updating is, precisely, a shadow PT and not the one used
by the hardware.
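
To illustrate the point (a minimal sketch, assuming a direct, non-shadowed
mapping; the function name is made up): the value of a CAS-style pte update
is that the compare-and-swap hits the very pte the MMU walks, so concurrent
hardware A/D-bit updates can't be lost. Against a shadow PT the CAS would
only see the shadow copy, and the hardware could still modify the real pte
behind your back.

#include <sys/types.h>
#include <sys/atomic.h>		/* atomic_cas_64() */

typedef uint64_t pt_entry_t;	/* assumed 64-bit ptes */

/*
 * Clear bits in a pte that the hardware MMU actually walks.
 * Returns the previous pte value.  Only meaningful when 'pte' points
 * at the real (hardware-visible) page table, not at a shadow copy.
 */
static pt_entry_t
pte_clearbits(volatile pt_entry_t *pte, pt_entry_t clearbits)
{
	pt_entry_t opte, npte;

	do {
		opte = *pte;
		npte = opte & ~clearbits;
	} while (atomic_cas_64(pte, opte, npte) != opte);

	return opte;
}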

-- 
Manuel Bouyer <bou...@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference