Hi All,
Does anyone know how to transfer a data buffer through a hypercall?
According to the current implementation of kvm_emulate_hypercall,
it only takes primitive types as parameters, passed through different
registers. Can we use a hypercall, like the read/write system calls, to
transfer data between guest and
Hi Paolo,
Thanks a lot!
On Fri, Jun 19, 2015 at 2:27 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 19/06/2015 03:52, Hu Yaohui wrote:
Hi All,
In kernel 3.14.2, KVM uses a shadow EPT (EPT02) to implement
nested EPT. The shadow EPT (EPT02) is a shadow of the guest EPT (EPT12). If
the L1
to the source code, each allocated
shadow page (struct kvm_mmu_page) has its gfn field filled.
Thanks,
Yaohui
On Fri, Jun 19, 2015 at 11:23 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 19/06/2015 14:44, Hu Yaohui wrote:
Hi Paolo,
Thanks a lot!
On Fri, Jun 19, 2015 at 2:27 AM, Paolo Bonzini
Hi All,
In kernel 3.14.2, KVM uses a shadow EPT (EPT02) to implement
nested EPT. The shadow EPT (EPT02) is a shadow of the guest EPT (EPT12). If
the L1 guest writes to the guest EPT (EPT12), how can the shadow
EPT (EPT02) be modified accordingly?
Thanks,
Yaohui
Hi all,
A few years ago, Xen provided a vmfork feature based on the shadow page
table. I am wondering whether KVM provides a similar
feature on the same host.
By triggering vmfork, we can get a child VM whose CPU and I/O state
are the same as the parent's, and whose memory is CoW-shared
Hi All,
Is direct device assignment in a nested VM supported in the latest KVM
mainline now?
Thanks,
Yaohui
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi,
I have one question related to nested EPT page fault.
At the very start, the L0 hypervisor launches L2 with an empty EPT02
table, building the table on the fly.
When one L2 physical page is accessed, ept_page_fault (paging_tmpl.h)
will be called to handle this fault in L0, which will first call
Hi Abel,
Thanks a lot! It works now.
Best Wishes,
Yaohui
On Sun, May 4, 2014 at 10:57 AM, Abel Gordon a...@stratoscale.com wrote:
On Fri, May 2, 2014 at 11:11 PM, Hu Yaohui loki2...@gmail.com wrote:
On Fri, May 2, 2014 at 2:39 PM, Bandan Das b...@redhat.com wrote:
Hu Yaohui loki2
shadow pages
for mmio generation wraparound
May 2 11:12:32 o46 kernel: [38100.616392] nested_vmx_exit_handled
failed vm entry 7
/log
Thanks,
Yaohui
On Fri, May 2, 2014 at 5:53 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 02/05/2014 03:43, Hu Yaohui wrote:
Hi all,
I have a problem running
On Fri, May 2, 2014 at 11:52 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 02/05/2014 17:17, Hu Yaohui wrote:
Hi Paolo,
I have tried L0 with linux kernel 3.14.2 and L1 with linux kernel 3.14.2
L1 QEMU qemu-1.7.0
L2 QEMU qemu-1.7.0.
Do you mean L0 and L1?
Yes.
What is your QEMU
On Fri, May 2, 2014 at 2:39 PM, Bandan Das b...@redhat.com wrote:
Hu Yaohui loki2...@gmail.com writes:
On Fri, May 2, 2014 at 11:52 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 02/05/2014 17:17, Hu Yaohui wrote:
Hi Paolo,
I have tried L0 with linux kernel 3.14.2 and L1 with linux
Hi all,
I have a problem running the latest version of KVM with a nested configuration.
I used to run it with kernel 3.2.2 for both L0 and L1, which worked perfectly.
Then I changed my L0 to kernel 3.10.36 and L1 to kernel 3.12.10.
When I start the L2 guest in L1 with qemu-kvm, I get the following error
from
Hi Guangrong,
Since you wrote the following in kvm/mmu.txt:
quote
unsync:
If true, then the translations in this page may not match the guest's
translation. This is equivalent to the state of the tlb when a pte is
changed but before the tlb entry is flushed. Accordingly, unsync ptes
on it.
Thanks for your time!
Best Wishes,
Yaohui
On Tue, Mar 25, 2014 at 12:25 PM, Hu Yaohui loki2...@gmail.com wrote:
Hi Guangrong,
Since you wrote the following in kvm/mmu.txt:
quote
unsync:
If true, then the translations in this page may not match the guest's
translation
:
On 03/26/2014 12:40 PM, Hu Yaohui wrote:
Hi all,
I hope you have a good day!
I have debugged the code myself. I have called dump_stack() in
function __kvm_unsync_page
and function invlpg. Actually every time before invlpg is called,
the page fault handler will call __kvm_unsync_page before
Hi All,
If the host system decides that it wants to push a given page out to
swap, the host will notify KVM through the registered mmu notifier.
I am wondering if there are any other situations, other
than swapping, that will trigger the mmu notifier to report a host page
change to
Hi Juan,
What's that about?
Best Wishes,
Yaohui Hu
On Tue, Jan 14, 2014 at 10:52 AM, Juan Quintela quint...@redhat.com wrote:
Hi
Please, send any topic that you are interested in covering.
Thanks, Juan.
Call details:
09:00 AM to 10:00 AM EDT
Every two weeks
If you need phone number
Thank you Marcelo!
I really appreciate your explanation.
On Sat, Jan 11, 2014 at 7:27 AM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Thu, Jan 09, 2014 at 03:08:25PM -0500, Hu Yaohui wrote:
Hi Marcelo,
Thanks for your reply!
I hope you have a good day! I am sorry that it's
, what will
happen? Thanks for your time!
Best Wishes,
Yaohui Hu
On Wed, Jan 8, 2014 at 6:35 PM, Hu Yaohui loki2...@gmail.com wrote:
Thanks a lot Marcelo!
On Wed, Jan 8, 2014 at 6:25 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jan 08, 2014 at 06:14:15PM -0500, Hu Yaohui wrote:
Hi guys
Hi all.
If the hardware does not support APIC virtualization: kvm_vcpu_kick sends a
host IPI to the remote vcpu if that vcpu is in guest mode, and a VM-exit
(exit reason: external interrupt) will be triggered by the host IPI.
Then on VM-entry (inject_pending_event) the guest IPI is injected. If
Hi,
I hope you have a good day! I have a question regarding Guest TLB flush IPIs.
quote
If the hardware does not support APIC virtualization: kvm_vcpu_kick sends a
host IPI to the remote vcpu if that vcpu is in guest mode, and a VM-exit
(exit reason: external interrupt) will be triggered due to the
Thanks a lot Marcelo!
On Thu, Jan 9, 2014 at 1:46 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jan 08, 2014 at 06:35:00PM -0500, Hu Yaohui wrote:
Thanks a lot Marcelo!
On Wed, Jan 8, 2014 at 6:25 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jan 08, 2014 at 06:14:15PM
for your time!
Best Wishes,
Yaohui Hu
On Thu, Jan 9, 2014 at 1:47 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Thu, Jan 09, 2014 at 11:28:40AM -0500, Hu Yaohui wrote:
Hi Marcelo,
I am sorry to bother you again. In your first possibility,
kvm_vcpu_kick sends a host IPI to the remote vcpu
Hi All,
I have a question regarding Guest TLB flush IPIs. Suppose we have two
vcpus, 0 and 1.
When vcpu#0 wants to invalidate a TLB entry on vcpu#1, an IPI will
be generated by the lapic on vcpu#0 by writing to the ICR, which will cause a
vmexit.
Hi guys,
I think you should be pretty familiar with the lapic. I would really
appreciate it if someone could shed some light on my problem
regarding Guest TLB flush IPIs.
Suppose we have two vcpus, 0 and 1.
When vcpu#0 wants to invalidate a TLB entry on vcpu#1, an IPI will
be generated by the lapic on
Thanks a lot Marcelo!
On Wed, Jan 8, 2014 at 6:25 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jan 08, 2014 at 06:14:15PM -0500, Hu Yaohui wrote:
Hi guys,
I think you should be pretty familiar with the lapic. I would really
appreciate it if someone could shed some light on my problem
Hi,
I am looking at the source code of KVM. I am very curious how an IPI
is emulated between different vcpus. I found that before one vcpu is
about to send an IPI to another vcpu, the receiving side will always
exit with reason 12 (EXIT_REASON_HLT) first, and then the sending side
sends the IPI. After