Hello,
This RFC patch series provides facility to dedicate CPUs to KVM guests
and enable the guests to handle interrupts from passed-through PCI devices
directly (without VM exit and relay by the host).
With this feature, we can improve throughput and response time of the device
and the host's
Split the memory hotplug function out of cpu_up() as cpu_memory_up(), which
will be used for assigning a memory area to off-lined CPUs in a following patch
in this series.
Signed-off-by: Tomoki Sekiyama tomoki.sekiyama...@hitachi.com
Cc: Avi Kivity a...@redhat.com
Cc: Marcelo Tosatti mtosa...@redhat.com
Add a facility for using offlined CPUs as slave CPUs. Slave CPUs are
specialized to exclusively run functions specified by online CPUs, and they
do not run user processes.
To use this feature, build the kernel with CONFIG_SLAVE_CPU=y.
A slave CPU is launched by calling cpu_slave_up() when the CPU is
Enable virtualization when slave CPUs are activated, and disable it when
the CPUs are dying, using the slave CPU notifier call chain.
On x86, the TSC kHz must also be initialized by tsc_khz_changed() when the
new slave CPUs are activated.
Signed-off-by: Tomoki Sekiyama tomoki.sekiyama...@hitachi.com
Cc: Avi
Add a path to migrate execution of vcpu_enter_guest() to a slave CPU when
vcpu->arch.slave_cpu is set.
After moving to the slave CPU, execution goes back to the online CPU when the
guest exits for reasons that cannot be handled by the slave CPU alone
(e.g. handling async page faults).
On migration,
Add an interface to set/get slave CPU dedicated to the vCPUs.
By calling ioctl with KVM_GET_SLAVE_CPU, users can get the slave CPU id
for the vCPU. -1 is returned if a slave CPU is not set.
By calling ioctl with KVM_SET_SLAVE_CPU, users can dedicate the specified
slave CPU to the vCPU. The CPU
If the slave CPU receives an interrupt while running a guest, the current
implementation must go back to online CPUs once to handle the interrupt.
This behavior will be replaced by a later patch, which introduces a direct
interrupt handling mechanism for the guest.
Signed-off-by: Tomoki Sekiyama
Replace local_irq_disable/enable with local_irq_save/restore in the paths
executed on slave CPUs. This is required because IRQs are disabled
while the guest is running on the slave CPUs.
Signed-off-by: Tomoki Sekiyama tomoki.sekiyama...@hitachi.com
Cc: Avi Kivity a...@redhat.com
Cc:
Avoid exiting from a guest on slave CPU even if HLT instruction is
executed. Since the slave CPU is dedicated to a vCPU, exit on HLT is
not required, and avoiding VM exit will improve the guest's performance.
This is a partial revert of
10166744b80a (KVM: VMX: remove yield_on_hlt)
Cc:
Add a facility for slave CPUs to use an IRQ vector different from that of
online CPUs.
When an alternative vector for an IRQ is registered by remap_slave_vector_irq()
and the IRQ affinity is set only to slave CPUs, the device is configured
to use the alternative vector.
The current patch only supports MSI and Intel
Enable the APIC to handle interrupts on slave CPUs, and enable interrupt
routing to slave CPUs by setting IRQ affinity.
As slave CPUs which run a KVM guest handle external interrupts directly in
the vCPUs, the guest's vector/IRQ mapping is different from the host's.
That requires interrupts to be
Add some definitions to use PIN_BASED_PREEMPTION_TIMER.
When PIN_BASED_PREEMPTION_TIMER is enabled, the guest will exit
with reason=EXIT_REASON_PREEMPTION_TIMER when the counter specified in
VMX_PREEMPTION_TIMER_VALUE becomes 0.
This patch also adds a dummy handler for
Add some fix-ups that proxy slab operations to online CPUs for the guest,
in order to avoid touching the slab on slave CPUs, where some slab functions
are not activated.
Currently, the slab may be touched on slave CPUs in the following three cases.
For each case, the fix-ups below are introduced:
*
Page faults caused by the guest running on slave CPUs cannot be
handled on the slave CPUs because they run in idle process context.
With this patch, a page fault that happens on a slave CPU is notified to an
online CPU using struct kvm_access_fault, and is handled after the
user process for
Add a facility to use hrtimers on slave CPUs.
To initialize hrtimers when slave CPUs are activated, and to shut down hrtimers
when slave CPUs are stopped, this patch adds the slave CPU notifier chain,
which calls registered callbacks when slave CPUs are up, dying, and dead.
The registered callbacks
When a PCI device is assigned to a guest running on slave CPUs, this
routes the device's MSI/MSI-X interrupts directly to the guest.
Because the guest uses a different interrupt vector from the host,
vector remapping is required. This is safe because slave CPUs only handle
interrupts for the
For slave CPUs, it is inappropriate to request a TLB flush using an IPI,
because the IPI may be delivered to the KVM guest when the slave CPU is running
the guest with direct interrupt routing.
Instead, this patch registers the TLB flush request in a per-CPU bitmask and
sends an NMI to interrupt execution of the guest.
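The deferred-flush idea can be sketched in plain C with a shared request bitmask (a userspace sketch; the function names are assumptions, and the real patch uses kernel per-cpu state and delivers an actual NMI):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical flush-request mask: bit N set means CPU N must flush
 * its TLB before (re)entering the guest. */
static atomic_ulong flush_pending;

/* Instead of an IPI, record the request; the real patch then sends an
 * NMI to interrupt guest execution. */
static void request_tlb_flush(int cpu)
{
    atomic_fetch_or(&flush_pending, 1UL << cpu);
    /* send_nmi(cpu);  -- omitted in this userspace sketch */
}

/* Run by the slave CPU, e.g. from its NMI handler: consume the request. */
static bool test_and_clear_flush(int cpu)
{
    unsigned long bit = 1UL << cpu;
    return (atomic_fetch_and(&flush_pending, ~bit) & bit) != 0;
}
```

The atomic read-modify-write makes a request raised between the check and VM entry impossible to lose, which is the property the IPI-based path cannot provide here.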
Make interrupts on slave CPUs be handled by guests without a VM exit.
This reduces the CPU time the host spends transferring interrupts of assigned
PCI devices from the host to guests. It also avoids the cost of VM exits
and quickens the guests' response to interrupts.
When a slave CPU is dedicated to a vCPU,
Since NMIs cannot be disabled around VM entry, there is a race between
receiving an NMI to kick a guest and entering the guest on a slave CPU. If the
NMI is received just before VM entry, then after the NMI handler is invoked,
entry into the guest continues and the effect of the NMI is lost.
This
This patch adds watchdog emulation to KVM. The watchdog
emulation is enabled by the KVM_ENABLE_CAP(KVM_CAP_PPC_WDT) ioctl.
A kernel timer is used for the watchdog emulation and emulates the
h/w watchdog state machine. On watchdog timer expiry, it exits to QEMU
if TCR.WRC is non-zero. QEMU can
On Thu, Jun 28, 2012 at 01:31:29AM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 04:04:18PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 18:26 +0300, Michael S. Tsirkin wrote:
On Tue, Jun 26, 2012 at 11:09:46PM -0600, Alex Williamson wrote:
@@ -71,6 +130,14 @@
The count variable is unsigned here so the test for errors doesn't
work.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
diff --git a/drivers/vfio/pci/vfio_pci_config.c
b/drivers/vfio/pci/vfio_pci_config.c
index a4f7321..10bc6a8 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++
On Wed, Jun 27, 2012 at 01:23:23PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 15:37 +0300, Dan Carpenter wrote:
On Mon, Jun 25, 2012 at 10:55:52PM -0600, Alex Williamson wrote:
Hi,
VFIO has been kicking around for well over a year now and has been
posted numerous times for
In vfio_pci_ioctl() there is a potential integer underflow where we
might allocate less data than intended. We check that hdr.count is not
too large, but we don't check whether it is negative:
drivers/vfio/pci/vfio_pci.c
312 if (hdr.argsz - minsz < hdr.count * size ||
313
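The bug class is easy to demonstrate in isolation (a simplified model, not the actual vfio structures or checks):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the bug class: a count read from userspace is checked only
 * against an upper bound.  A value with a huge magnitude can make
 * count * size wrap, so the check passes while any later allocation is
 * far smaller than "count" suggests. */
static int check_unsigned(uint32_t avail, uint32_t count, uint32_t size)
{
    if (count * size > avail)   /* wraps: 0x40000001u * 4 == 4 */
        return -1;              /* "too large" */
    return 0;                   /* accepted */
}

/* Fixed version: reject negative values and compare without overflow. */
static int check_fixed(uint32_t avail, uint32_t count, uint32_t size)
{
    if ((int32_t)count < 0)
        return -1;
    if (count > avail / size)   /* overflow-safe comparison */
        return -1;
    return 0;
}
```

With avail = 64 and size = 4, count = 0x40000001 sails through the naive check because the multiplication wraps to 4, while the fixed check rejects it.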
This ioctl function is supposed to return a negative error code or zero
on success. copy_to_user() returns zero or the number of bytes
remaining to be copied.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 457acf3..1aa373f
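The fixed return convention can be shown with a mock (copy_to_user() is simulated here; only the bytes-remaining return convention is the kernel's real behavior):

```c
#include <assert.h>
#include <string.h>

#define EFAULT 14

/* Mock of copy_to_user(): returns the number of bytes NOT copied,
 * 0 on success, mimicking the kernel's convention.  fail_at simulates
 * a fault partway through the copy. */
static unsigned long mock_copy_to_user(void *dst, const void *src,
                                       unsigned long n, unsigned long fail_at)
{
    if (fail_at < n) {
        memcpy(dst, src, fail_at);
        return n - fail_at;           /* bytes remaining */
    }
    memcpy(dst, src, n);
    return 0;
}

/* The fix: translate "bytes remaining" into -EFAULT or 0, instead of
 * returning the raw copy_to_user() result from the ioctl. */
static long ioctl_return(void *dst, const void *src, unsigned long n,
                         unsigned long fail_at)
{
    return mock_copy_to_user(dst, src, n, fail_at) ? -(long)EFAULT : 0;
}
```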
Signed-off-by: Guo Chao y...@linux.vnet.ibm.com
---
arch/x86/kvm/vmx.c |6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 32eb588..7c40477 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1325,7 +1325,7 @@ static
Signed-off-by: Guo Chao y...@linux.vnet.ibm.com
---
arch/x86/kvm/svm.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f75af40..0d20bdd 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2063,7 +2063,7 @@ static
The count variable needs to be signed here because we use it to store
negative error codes.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
---
v2: Just declare count as signed.
diff --git a/drivers/vfio/pci/vfio_pci_config.c
b/drivers/vfio/pci/vfio_pci_config.c
index a4f7321..2e00aa8
On 2012-06-28 03:15, Wen Congyang wrote:
At 06/27/2012 10:39 PM, Jan Kiszka Wrote:
On 2012-06-27 09:02, Wen Congyang wrote:
When the guest is panicked, it will write 0x1 to the port KVM_PV_PORT.
So if qemu reads 0x1 from this port, we can do the following three
things according to the
On Wed, Jun 27, 2012 at 09:52:52PM -0600, Alex Williamson wrote:
On Thu, 2012-06-28 at 01:28 +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 03:28:19PM -0600, Alex Williamson wrote:
On Thu, 2012-06-28 at 00:14 +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 02:59:09PM
On Thu, Jun 28, 2012 at 09:34:31AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 01:31:29AM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 04:04:18PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 18:26 +0300, Michael S. Tsirkin wrote:
On Tue, Jun 26, 2012 at
On Thu, Jun 28, 2012 at 11:34:35AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 09:34:31AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 01:31:29AM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 04:04:18PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at
On Wed, Jun 27, 2012 at 04:24:30PM +0200, Cornelia Huck wrote:
On Tue, 26 Jun 2012 23:09:04 -0600
Alex Williamson alex.william...@redhat.com wrote:
Prune this down to just the struct kvm_irqfd so we can avoid
changing function definition for every flag or field we use.
Signed-off-by:
On Thu, Jun 28, 2012 at 11:35:41AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 11:34:35AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 09:34:31AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 01:31:29AM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at
On Wed, Jun 27, 2012 at 08:33:40AM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 12:58 +0300, Michael S. Tsirkin wrote:
On Tue, Jun 26, 2012 at 11:08:52PM -0600, Alex Williamson wrote:
Ok, let's see how this flies. I actually quite like this, so be
gentle tearing it apart ;)
On Wed, Jun 27, 2012 at 12:19 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Wed, Jun 27, 2012 at 08:41:49AM +0100, Stefan Hajnoczi wrote:
On Wed, Jun 27, 2012 at 8:39 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Jun 26, 2012 at 8:34 PM, Marcelo Tosatti mtosa...@redhat.com
On Thu, Jun 28, 2012 at 11:41:05AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 11:35:41AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 11:34:35AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 09:34:31AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at
On Thu, Jun 28, 2012 at 11:46:11AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 11:41:05AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 11:35:41AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 11:34:35AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at
On Thu, Jun 28, 2012 at 11:48:40AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 11:46:11AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 11:41:05AM +0300, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2012 at 11:35:41AM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at
On Thu, 28 Jun 2012 11:38:57 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Jun 27, 2012 at 04:24:30PM +0200, Cornelia Huck wrote:
On Tue, 26 Jun 2012 23:09:04 -0600
Alex Williamson alex.william...@redhat.com wrote:
Prune this down to just the struct kvm_irqfd so we can avoid
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that kvm-kmod-3.0 is working with
qemu-kvm-1.0.1 while kvm-kmod-3.3 and kvm-kmod-3.4 are not. What is
working as well is kvm-kmod-3.4 with an old userspace
On 2012-06-28 11:11, Peter Lieven wrote:
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that kvm-kmod-3.0 is working with
qemu-kvm-1.0.1 while kvm-kmod-3.3 and kvm-kmod-3.4 are not. What is
working as well is
On 28.06.2012 11:21, Jan Kiszka wrote:
On 2012-06-28 11:11, Peter Lieven wrote:
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that kvm-kmod-3.0 is working with
qemu-kvm-1.0.1 while kvm-kmod-3.3 and kvm-kmod-3.4
On Thu, Jun 28, 2012 at 11:03:16AM +0200, Cornelia Huck wrote:
On Thu, 28 Jun 2012 11:38:57 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Jun 27, 2012 at 04:24:30PM +0200, Cornelia Huck wrote:
On Tue, 26 Jun 2012 23:09:04 -0600
Alex Williamson alex.william...@redhat.com
does anyone know what's that here in handle_mmio?
/* hack: Red Hat 7.1 generates these weird accesses. */
if ((addr > 0xa-4 && addr <= 0xa) && kvm_run->mmio.len == 3)
return 0;
thanks,
peter
On 28.06.2012 11:31, Peter Lieven wrote:
On 28.06.2012 11:21, Jan Kiszka wrote:
On
On 2012-06-28 11:31, Peter Lieven wrote:
On 28.06.2012 11:21, Jan Kiszka wrote:
On 2012-06-28 11:11, Peter Lieven wrote:
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that kvm-kmod-3.0 is working with
On 2012-06-28 03:22, Greg KH wrote:
On Wed, Jun 27, 2012 at 04:54:54PM +0800, Yanfei Zhang wrote:
This patch exports offsets of fields via /sys/devices/cpu/vmcs/.
Individual offsets are contained in subfiles named by the field's
encoding, e.g.: /sys/devices/cpu/vmcs/0800
Signed-off-by:
On 28.06.2012 11:39, Jan Kiszka wrote:
On 2012-06-28 11:31, Peter Lieven wrote:
On 28.06.2012 11:21, Jan Kiszka wrote:
On 2012-06-28 11:11, Peter Lieven wrote:
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that
On 28.06.2012 11:39, Jan Kiszka wrote:
On 2012-06-28 11:31, Peter Lieven wrote:
On 28.06.2012 11:21, Jan Kiszka wrote:
On 2012-06-28 11:11, Peter Lieven wrote:
On 27.06.2012 18:54, Jan Kiszka wrote:
On 2012-06-27 17:39, Peter Lieven wrote:
Hi all,
i debugged this further and found out that
Hi guys, have any updates here?
On Sun, Jun 17, 2012 at 12:59 PM, Igor Laskovy igor.lask...@gmail.com wrote:
John, Jason, can you please concretely clarify what this bad things? For
example the worst-case.
Yoshi, Kei, can you please clarify current status of Kemari. How far it is
from
On Thu, Jun 28, 2012 at 05:54:30PM +0800, Yanfei Zhang wrote:
On 2012-06-28 03:22, Greg KH wrote:
On Wed, Jun 27, 2012 at 04:54:54PM +0800, Yanfei Zhang wrote:
This patch exports offsets of fields via /sys/devices/cpu/vmcs/.
Individual offsets are contained in subfiles named by the field's
On Thu, 28 Jun 2012 12:34:43 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Jun 28, 2012 at 11:03:16AM +0200, Cornelia Huck wrote:
How about something like this as parameter for the new ioctl?
struct kvm_irqfd2 {
__u32 fd;
__u32 flags; /* for things like deassign */
On Thu, Jun 28, 2012 at 02:00:41PM +0200, Cornelia Huck wrote:
On Thu, 28 Jun 2012 12:34:43 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Jun 28, 2012 at 11:03:16AM +0200, Cornelia Huck wrote:
How about something like this as parameter for the new ioctl?
struct
On 06/27/2012 12:21 PM, Michael S. Tsirkin wrote:
On Tue, Jun 26, 2012 at 11:09:32PM -0600, Alex Williamson wrote:
We only know of one so far.
Signed-off-by: Alex Williamson alex.william...@redhat.com
Ugh. So we have a bug: we should have sanitized the fields.
If there's buggy userspace
On 06/28/2012 12:19 AM, Alex Williamson wrote:
@@ -302,6 +385,7 @@ kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd
*args)
{
struct _irqfd *irqfd, *tmp;
struct eventfd_ctx *eventfd;
+ bool is_level = (args->flags & KVM_IRQFD_FLAG_LEVEL) != 0;
!= 0 is not needed here I
On 06/27/2012 08:10 AM, Alex Williamson wrote:
This is an alternate level irqfd de-assert mode that's potentially
useful for emulated drivers. It's included here to show how easy it
is to implement with the new level irqfd and eoifd support. It's
possible this mode might also prove
Hi,
i debugged my initial problem further and found out that the problem
happens to be that
the main thread is stuck in pause_all_vcpus() on reset or quit commands
in the monitor
if one cpu is stuck in the do-while loop kvm_cpu_exec. If I modify the
condition from while (ret == 0)
to while
On Wed, Jun 27, 2012 at 09:55:44PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 17:51 +0300, Gleb Natapov wrote:
On Wed, Jun 27, 2012 at 08:29:04AM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 16:58 +0300, Gleb Natapov wrote:
On Tue, Jun 26, 2012 at 11:10:08PM -0600, Alex
On 2012-06-28 15:05, Peter Lieven wrote:
Hi,
i debugged my initial problem further and found out that the problem
happens to be that
the main thread is stuck in pause_all_vcpus() on reset or quit commands
in the monitor
if one cpu is stuck in the do-while loop kvm_cpu_exec. If I modify the
On Thu, Jun 28, 2012 at 04:11:40PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 09:55:44PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 17:51 +0300, Gleb Natapov wrote:
On Wed, Jun 27, 2012 at 08:29:04AM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 16:58 +0300,
On 06/26/2012 02:34 PM, Marcelo Tosatti wrote:
On Sat, Jun 23, 2012 at 12:55:49AM +0200, Jan Kiszka wrote:
Should have declared this [RFC] in the subject and CC'ed kvm...
On 2012-06-23 00:45, Jan Kiszka wrote:
This sketches a possible path to get rid of the iothread lock on vmexits
in KVM
On 28.06.2012 15:25, Jan Kiszka wrote:
On 2012-06-28 15:05, Peter Lieven wrote:
Hi,
i debugged my initial problem further and found out that the problem
happens to be that
the main thread is stuck in pause_all_vcpus() on reset or quit commands
in the monitor
if one cpu is stuck in the do-while
On 06/28/2012 05:10 PM, Anthony Liguori wrote:
1. read_lock(memmap_lock)
2. MemoryRegionSection mrs = lookup(addr)
3. qom_ref(mrs.mr->dev)
4. read_unlock(memmap_lock)
5. mutex_lock(dev->lock)
6. dispatch(mrs, addr, data, size)
7. mutex_unlock(dev->lock)
Just a detail, I don't think
On 2012-06-28 17:02, Peter Lieven wrote:
On 28.06.2012 15:25, Jan Kiszka wrote:
On 2012-06-28 15:05, Peter Lieven wrote:
Hi,
i debugged my initial problem further and found out that the problem
happens to be that
the main thread is stuck in pause_all_vcpus() on reset or quit commands
in
On Tue, Jun 26, 2012 at 10:50:36AM +0800, Zhengwang Ruan wrote:
Hello,
I am freshman to this community, I want to know if there is a
mail-list in which I can ask questions against kvm and people are
willing to answer too? Thanks! :-)
Hi Zhengwang, welcome to KVM.
You can try IRC: #kvm
On 06/14/2012 05:04 AM, Mao, Junjie wrote:
This patch handles PCID/INVPCID for guests.
Process-context identifiers (PCIDs) are a facility by which a logical
processor
may cache information for multiple linear-address spaces so that the processor
may retain cached information when software
On 06/28/2012 06:49 PM, Avi Kivity wrote:
On 06/14/2012 05:04 AM, Mao, Junjie wrote:
This patch handles PCID/INVPCID for guests.
Process-context identifiers (PCIDs) are a facility by which a logical
processor
may cache information for multiple linear-address spaces so that the
processor
- Original Message -
In summary, current PV has huge benefit on non-PLE machine.
On PLE machine, the results become very sensitive to load, type of
workload and SPIN_THRESHOLD. Also PLE interference has significant
effect on them. But still it has slight edge over non PV.
Hi
On 06/27/2012 07:27 PM, Jan Kiszka wrote:
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also
On 06/27/2012 07:27 PM, Jan Kiszka wrote:
Changes in v2:
- added memory_region_clear_flush_coalesced
- call memory_region_clear_flush_coalesced from
memory_region_clear_coalescing
- wrap all region manipulations via memory_region_transaction_begin/
commit internally
- flush
On 06/28/2012 09:30 PM, Andrew Jones wrote:
- Original Message -
In summary, current PV has huge benefit on non-PLE machine.
On PLE machine, the results become very sensitive to load, type of
workload and SPIN_THRESHOLD. Also PLE interference has significant
effect on them. But still
On 06/24/2012 06:02 PM, Alex Williamson wrote:
On Sun, 2012-06-24 at 15:56 +0300, Avi Kivity wrote:
On 06/23/2012 01:16 AM, Alex Williamson wrote:
I think we're probably also going to need something like this.
When running in non-accelerated qemu, we're going to have to
create some kind of
On 28.06.2012 17:22, Jan Kiszka wrote:
On 2012-06-28 17:02, Peter Lieven wrote:
On 28.06.2012 15:25, Jan Kiszka wrote:
On 2012-06-28 15:05, Peter Lieven wrote:
Hi,
i debugged my initial problem further and found out that the problem
happens to be that
the main thread is stuck in
On 06/28/2012 07:29 PM, Peter Lieven wrote:
Yes. A signal is sent, and KVM returns from the guest to userspace on
pending signals.
is there a description available how this process exactly works?
The kernel part is in vcpu_enter_guest(), see the check for
signal_pending(). But this hasn't
On 06/28/2012 09:08 AM, Tomoki Sekiyama wrote:
For slave CPUs, it is inappropriate to request a TLB flush using an IPI,
because the IPI may be sent to a KVM guest when the slave CPU is running
the guest with direct interrupt routing.
Instead, this patch registers the TLB flush request in a per-CPU bitmask and
On 06/28/2012 09:08 AM, Tomoki Sekiyama wrote:
Since NMIs cannot be disabled around VM entry, there is a race between
receiving an NMI to kick a guest and entering the guest on a slave CPU. If the
NMI is received just before VM entry, then after the NMI handler is invoked,
it continues entering the
On Thu, 28 Jun 2012 15:09:49 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Jun 28, 2012 at 02:00:41PM +0200, Cornelia Huck wrote:
On Thu, 28 Jun 2012 12:34:43 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Jun 28, 2012 at 11:03:16AM +0200, Cornelia Huck wrote:
On Thu, Jun 28, 2012 at 05:08:04PM +0300, Gleb Natapov wrote:
On Thu, Jun 28, 2012 at 04:11:40PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 27, 2012 at 09:55:44PM -0600, Alex Williamson wrote:
On Wed, 2012-06-27 at 17:51 +0300, Gleb Natapov wrote:
On Wed, Jun 27, 2012 at 08:29:04AM
On Thu, Jun 28, 2012 at 06:51:09PM +0200, Cornelia Huck wrote:
On Thu, 28 Jun 2012 15:09:49 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Jun 28, 2012 at 02:00:41PM +0200, Cornelia Huck wrote:
On Thu, 28 Jun 2012 12:34:43 +0300
Michael S. Tsirkin m...@redhat.com wrote:
On 06/28/2012 09:07 AM, Tomoki Sekiyama wrote:
Hello,
This RFC patch series provides facility to dedicate CPUs to KVM guests
and enable the guests to handle interrupts from passed-through PCI devices
directly (without VM exit and relay by the host).
With this feature, we can improve
On 06/28/2012 09:07 AM, Tomoki Sekiyama wrote:
Add path to migrate execution of vcpu_enter_guest to a slave CPU when
vcpu->arch.slave_cpu is set.
After moving to the slave CPU, it goes back to the online CPU when the
guest is exited by reasons that cannot be handled by the slave CPU only
On Thu, 2012-06-28 at 19:27 +0300, Avi Kivity wrote:
On 06/24/2012 06:02 PM, Alex Williamson wrote:
On Sun, 2012-06-24 at 15:56 +0300, Avi Kivity wrote:
On 06/23/2012 01:16 AM, Alex Williamson wrote:
I think we're probably also going to need something like this.
When running in
On 2012-06-28 18:58, Avi Kivity wrote:
On 06/28/2012 09:07 AM, Tomoki Sekiyama wrote:
Hello,
This RFC patch series provides facility to dedicate CPUs to KVM guests
and enable the guests to handle interrupts from passed-through PCI devices
directly (without VM exit and relay by the host).
On 06/28/2012 08:26 PM, Jan Kiszka wrote:
This is both impressive and scary. What is the target scenario here?
Partitioning? I don't see this working for generic consolidation.
From my POV, partitioning - including hard realtime partitions - would
provide some use cases. But, as far as
On 06/28/2012 06:45 AM, Takuya Yoshikawa wrote:
On Thu, 28 Jun 2012 11:12:51 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
struct kvm_arch_memory_slot {
+ unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
};
On 06/28/2012 05:02 AM, Takuya Yoshikawa wrote:
When we invalidate a THP page, we call the handler with the same
rmap_pde argument 512 times in the following loop:
for each guest page in the range
for each level
unmap using rmap
This patch avoids these extra handler calls by
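The redundancy, and the effect of the fix, in a toy model (only the call counts are modeled; the real code walks actual rmaps):

```c
#include <assert.h>

/* Invalidating a 2MB THP walks 512 guest pages, but every one of them
 * resolves to the same large-page rmap_pde, so the naive nested loop
 * invokes the handler 512 times with an identical argument. */
enum { PAGES_PER_THP = 512 };

static int handler_calls;
static void handler(int rmap_pde) { (void)rmap_pde; handler_calls++; }

static void naive_invalidate(void)
{
    for (int gfn = 0; gfn < PAGES_PER_THP; gfn++)
        handler(0 /* same rmap_pde on every iteration */);
}

static void batched_invalidate(void)
{
    /* The fix: call the handler once per distinct rmap_pde. */
    handler(0);
}
```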
On 04/27/2012 09:23 PM, Gleb Natapov wrote:
On Fri, Apr 27, 2012 at 04:15:35PM +0530, Raghavendra K T wrote:
On 04/24/2012 03:29 PM, Gleb Natapov wrote:
On Mon, Apr 23, 2012 at 03:29:47PM +0530, Raghavendra K T wrote:
From: Srivatsa Vaddagiriva...@linux.vnet.ibm.com
KVM_HC_KICK_CPU allows
On Tue, Jun 26, 2012 at 11:10:08PM -0600, Alex Williamson wrote:
diff --git a/Documentation/virtual/kvm/api.txt
b/Documentation/virtual/kvm/api.txt
index b216709..87a2558 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1987,6 +1987,30 @@ interrupts
On 2012/06/28 5:34, Sterling Windmill wrote:
Is Kemari still in active development?
No, it's not. Currently we have no intention to add new features into
Kemari.
Thanks,
Kei
Best regards,
Sterling Windmill
On Sun, Dec 4, 2011 at 9:45 PM, OHMURA Kei ohmura@lab.ntt.co.jp
On Thu, 2012-06-28 at 11:07 +0300, Dan Carpenter wrote:
The count variable needs to be signed here because we use it to store
negative error codes.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
---
v2: Just declare count as signed.
diff --git a/drivers/vfio/pci/vfio_pci_config.c
On Thu, 2012-06-28 at 09:44 +0300, Dan Carpenter wrote:
In vfio_pci_ioctl() there is a potential integer underflow where we
might allocate less data than intended. We check that hdr.count is not
too large, but we don't check whether it is negative:
drivers/vfio/pci/vfio_pci.c
312
On Thu, 2012-06-28 at 09:45 +0300, Dan Carpenter wrote:
This ioctl function is supposed to return a negative error code or zero
on success. copy_to_user() returns zero or the number of bytes
remaining to be copied.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
diff --git
On Fri, Jun 15, 2012 at 03:07:24PM -0400, Christoffer Dall wrote:
From: Marc Zyngier marc.zyng...@arm.com
In order to avoid compilation failure when KVM is not compiled in,
guard the mmu_notifier specific sections with both CONFIG_MMU_NOTIFIER
and KVM_ARCH_WANT_MMU_NOTIFIER, like it is being
On Fri, Jun 15, 2012 at 03:06:39PM -0400, Christoffer Dall wrote:
The following series implements KVM support for ARM processors,
specifically on the Cortex A-15 platform. Work is done in
collaboration between Columbia University, Virtual Open Systems and
ARM/Linaro.
The patch series
On Fri, Jun 15, 2012 at 03:08:22PM -0400, Christoffer Dall wrote:
From: Christoffer Dall cd...@cs.columbia.edu
This commit introduces the framework for guest memory management
through the use of 2nd stage translation. Each VM has a pointer
to a level-1 table (the pgd field in struct
On Fri, Jun 15, 2012 at 03:07:59PM -0400, Christoffer Dall wrote:
Sets up the required registers to run code in HYP-mode from the kernel.
By setting the HVBAR the kernel can execute code in Hyp-mode with
the MMU disabled. The HVBAR initially points to initialization code,
which initializes
Is there public documentation for hyp-mode available?
yes, you have to register on the ARM website
(http://infocenter.arm.com) but there you can download the ARM v7
architecture reference manual.
On Thu, Jun 28, 2012 at 6:34 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Jun 15, 2012 at 03:08:22PM -0400, Christoffer Dall wrote:
From: Christoffer Dall cd...@cs.columbia.edu
This commit introduces the framework for guest memory management
through the use of 2nd stage translation.
On Thu, Jun 28, 2012 at 6:35 PM, Marcelo Tosatti mtosa...@redhat.com wrote:
On Fri, Jun 15, 2012 at 03:07:59PM -0400, Christoffer Dall wrote:
Sets up the required registers to run code in HYP-mode from the kernel.
By setting the HVBAR the kernel can execute code in Hyp-mode with
the MMU
Hello,
I am just catching up on this email thread...
Perhaps one of you may be able to help answer this query.. preferably along
with some data. [BTW, I do understand the basic intent behind PLE in a typical
[sweet spot] use case where there is over subscription etc. and the need to