On Thu, Feb 21, 2008 at 05:52:25PM +0200, Avi Kivity wrote:
> Marcelo Tosatti wrote:
> >Batch pte updates and tlb flushes in lazy MMU mode.
> >
> >v1->v2:
> >- report individual hypercall error code, have multicall return number of 
> >processed entries.
> >- cover entire multicall duration with slots_lock instead of 
> >acquiring/reacquiring.
> >  
> 
> But not all hypercalls want slots_lock.

But slots_lock is required by kvm_read_guest() (and kvm_write_guest()). So
even if the underlying hypercall handlers do not themselves require
slots_lock, holding it across the multicall still makes sense.

> I suggested earlier switching to a "multiple mmu operation" hypercall 
> (and not have individual mmu hypercalls). What do you think about that?

We need to support different MMU operations in the same multicall, for
example normal pte writes, masked pte updates and tlb flushes. So an
array of

"operation, parameters"

entries is required.
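Such an entry layout could be sketched as follows (the names and field
sizes are illustrative, not the actual KVM ABI; the per-entry result
slot corresponds to the individual error code reporting added in v2):

```c
#include <stdint.h>

/* Hypothetical opcodes for batched MMU operations (illustrative only). */
enum kvm_batch_op {
        KVM_HYPERCALL_MMU_WRITE = 1,    /* write a pte value   */
        KVM_HYPERCALL_FLUSH_TLB = 2,    /* flush the guest TLB */
};

/* One entry in the shared multicall buffer: the operation code plus
 * its parameters.  The host walks the array, dispatches each entry
 * to the matching handler, and records a per-entry error code. */
struct kvm_multicall_entry {
        uint32_t op;            /* which hypercall to perform      */
        uint32_t result;        /* per-entry error code (host-set) */
        uint64_t params[2];     /* e.g. pte address and value      */
};
```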

> I think hypercalls will be quite diverse in the future and batching them 
> will not make sense always.

I agree. While the infrastructure is generic, allowing any kind of
hypercall to be batched, we explicitly select which ones may be
deferred (in the guest):

/* Only MMU writes and TLB flushes are batched, and only while the
 * guest is in lazy MMU mode; everything else is issued immediately. */
static int can_defer_hypercall(struct kvm_para_state *state, unsigned int nr)
{
        if (state->mode == PARAVIRT_LAZY_MMU) {
                switch (nr) {
                case KVM_HYPERCALL_MMU_WRITE:
                case KVM_HYPERCALL_FLUSH_TLB:
                        return 1;
                }
        }
        return 0;
}

Perhaps you want to move that enforcement to the host.

This makes it easy to batch future hypercalls where appropriate.
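The guest-side deferral path could then look roughly like this (a
sketch only; kvm_queue_entry() and kvm_single_hypercall() are
hypothetical stand-ins for the real guest plumbing, and QUEUE_MAX is
an invented limit):

```c
#include <stdio.h>

#define PARAVIRT_LAZY_MMU       1
#define KVM_HYPERCALL_MMU_WRITE 1
#define KVM_HYPERCALL_FLUSH_TLB 2
#define QUEUE_MAX               64      /* invented queue limit */

struct kvm_para_state {
        int mode;                /* PARAVIRT_LAZY_MMU while batching */
        unsigned int queue_len;  /* entries queued so far            */
};

/* Same policy as can_defer_hypercall() in the patch: only MMU writes
 * and TLB flushes are batched, and only inside lazy MMU mode. */
static int can_defer_hypercall(struct kvm_para_state *state, unsigned int nr)
{
        if (state->mode == PARAVIRT_LAZY_MMU) {
                switch (nr) {
                case KVM_HYPERCALL_MMU_WRITE:
                case KVM_HYPERCALL_FLUSH_TLB:
                        return 1;
                }
        }
        return 0;
}

/* Queue the call if it is batchable, otherwise issue it immediately.
 * The commented-out helpers mark where the real plumbing would go. */
static void kvm_hypercall_defer(struct kvm_para_state *state, unsigned int nr)
{
        if (can_defer_hypercall(state, nr) && state->queue_len < QUEUE_MAX)
                state->queue_len++;     /* kvm_queue_entry(nr, ...)  */
        else
                printf("issue hypercall %u now\n", nr); /* kvm_single_hypercall(nr) */
}
```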


_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
