Juan Quintela quint...@redhat.com wrote:
Hi
Please, send any topic that you are interested in covering.
People have complained in the past that I don't cancel the call until
the very last minute. So, what do you think of a deadline for
submitting topics of 23:00 UTC on Monday?
ok, no
On 16/09/2014 04:06, Andrew Jones wrote:
We shouldn't try Load-Exclusive instructions unless we've enabled memory
management, as these instructions depend on the data cache unit's
coherency monitor. This patch adds a new setup boolean, initialized to false,
that is used to guard
On 15/09/2014 20:14, Matt Mullins wrote:
On Tue, Sep 09, 2014 at 11:53:49PM -0700, Matt Mullins wrote:
On Mon, Sep 08, 2014 at 06:18:46PM +0200, Paolo Bonzini wrote:
What version of QEMU? Can you try the 12.04 qemu (which IIRC is 1.0) on
top of the newer kernel?
I did reproduce this on
On 16/09/2014 00:49, Liang Chen wrote:
---
(And what about a possible followup patch that replaces
kvm_mmu_flush_tlb() with kvm_make_request() again?
It would free the namespace a bit and we could call something
similarly named from vcpu_enter_guest() to do the job.)
That seems
On September 12, 2014 at 7:29 PM Jan Kiszka jan.kis...@siemens.com wrote:
On 2014-09-12 19:15, Jan Kiszka wrote:
On 2014-09-12 14:29, Erik Rull wrote:
On September 11, 2014 at 3:32 PM Jan Kiszka jan.kis...@siemens.com
wrote:
On 2014-09-11 15:25, Erik Rull wrote:
On August 6, 2014
On 02/09/2014 11:27, Will Deacon wrote:
The mpic, flic and xics are still not ported over, as I don't want to
risk breaking those devices
Actually FLIC is ported. :)
arch/s390/kvm/kvm-s390.c | 3 +-
arch/s390/kvm/kvm-s390.h | 1 +
include/linux/kvm_host.h | 4 +-
This patch only handles the situation where the L1 and L2 VMs share one
apic access page. When the L1 VM is running, if the shared apic access
page is migrated, the mmu_notifier will request all vcpus to exit to L0,
and reload the apic access page physical address into all the vcpus'
VMCSs (which is done by patch 5/6). And
To make the apic access page migratable, we no longer pin it in memory.
When it is migrated, we must reload its physical address into all
VMCSs. But when we tried to do this, all vcpus accessed
kvm_arch->apic_access_page without any locking. This is not safe.
Actually, we do not need
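The "request all vcpus" mechanism these patches rely on (kvm_make_request()-style per-vcpu request bits) can be sketched in user space roughly as follows; the names (make_all_vcpus_request, REQ_RELOAD_APIC_PAGE) and the fixed vcpu array are hypothetical stand-ins, not the real KVM implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define REQ_RELOAD_APIC_PAGE 0	/* hypothetical request bit */
#define NR_VCPUS 4

struct vcpu { atomic_uint requests; };
static struct vcpu vcpus[NR_VCPUS];

/* Post a request to every vcpu; each notices it on its next entry. */
static void make_all_vcpus_request(unsigned req)
{
	for (int i = 0; i < NR_VCPUS; i++)
		atomic_fetch_or(&vcpus[i].requests, 1u << req);
}

/* Atomically consume a pending request; returns true if it was set. */
static bool check_request(struct vcpu *v, unsigned req)
{
	unsigned old = atomic_fetch_and(&v->requests, ~(1u << req));
	return old & (1u << req);
}
```

The atomic read-modify-write avoids the unlocked-access problem described above: a vcpu can never observe a half-updated request word.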
The apic access page is pinned in memory. As a result, it cannot be
migrated/hot-removed.
Actually, it does not need to be pinned.
The hpa of the apic access page is stored in the VMCS APIC_ACCESS_ADDR pointer. When
the page is migrated, kvm_mmu_notifier_invalidate_page() will invalidate the
In init_rmode_identity_map(), there are two variables holding the return
value, r and ret, and it returns 0 on error and 1 on success. The function
is only called by vmx_create_vcpu(), and r is redundant.
This patch removes the redundant variable r, and makes init_rmode_identity_map()
return 0 on
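The cleanup described in this patch can be illustrated with a toy sketch; write_entry() and both function names are hypothetical stand-ins for the real VMX code, shown only to contrast the two return conventions:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for a page-table setup step. */
static int write_entry(int idx) { return idx >= 0 ? 0 : -EINVAL; }

/* Before: two result variables and a 1-on-success convention. */
static int setup_identity_map_old(int idx)
{
	int r, ret = 0;

	r = write_entry(idx);
	if (r < 0)
		goto out;
	ret = 1;	/* success */
out:
	return ret;	/* 0 on error, 1 on success */
}

/* After: single result, usual kernel convention of 0/-errno. */
static int setup_identity_map_new(int idx)
{
	return write_entry(idx);
}
```

Besides dropping the redundant variable, the 0/-errno convention matches the rest of the kernel, so the caller no longer has to invert the result.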
The ept identity pagetable and apic access page in kvm are pinned in memory.
As a result, they cannot be migrated/hot-removed.
But they don't actually need to be pinned in memory.
[For ept identity page]
Just do not pin it. When it is migrated, the guest will be able to find the
new page in the next ept
kvm_arch->ept_identity_pagetable holds the ept identity pagetable page, but
it is never used to refer to the page at all.
In vcpu initialization, it indicates two things:
1. whether the ept page is allocated
2. whether a memory slot for the identity page is initialized
Actually,
We have APIC_DEFAULT_PHYS_BASE defined as 0xfee00000, which is also the
address of the apic access page. So use this macro.
Signed-off-by: Tang Chen tangc...@cn.fujitsu.com
Reviewed-by: Gleb Natapov g...@kernel.org
---
arch/x86/kvm/svm.c | 3 ++-
arch/x86/kvm/vmx.c | 6 +++---
2 files changed, 5
On 16/09/2014 12:42, Tang Chen wrote:
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 33712fb..0df82c1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -210,6 +210,11 @@ void kvm_make_scan_ioapic_request(struct kvm *kvm)
make_all_cpus_request(kvm,
On 16/09/2014 12:42, Tang Chen wrote:
This patch only handles the situation where the L1 and L2 VMs share one
apic access page. When the L1 VM is running, if the shared apic access
page is migrated, the mmu_notifier will request all vcpus to exit to L0,
and reload the apic access page physical address for
all
In init_rmode_tss(), there are two variables holding the return
value, r and ret, and it returns 0 on error and 1 on success. The function
is only called by vmx_set_tss_addr(), and r is redundant.
This patch removes the redundant variable by making init_rmode_tss()
return 0 on success, -errno on
On 16/09/2014 12:41, Tang Chen wrote:
ept identity pagetable and apic access page in kvm are pinned in memory.
As a result, they cannot be migrated/hot-removed.
But actually they don't need to be pinned in memory.
[For ept identity page]
Just do not pin it. When it is migrated, guest
- Original Message -
On 16/09/2014 04:06, Andrew Jones wrote:
We shouldn't try Load-Exclusive instructions unless we've enabled memory
management, as these instructions depend on the data cache unit's
coherency monitor. This patch adds a new setup boolean, initialized to
Use cpuid structs in KVM to eliminate cryptic code with many bit operations.
This introduces no functional changes.
Signed-off-by: Nadav Amit na...@cs.technion.ac.il
---
arch/x86/kvm/cpuid.c | 36 ++--
1 file changed, 22 insertions(+), 14 deletions(-)
The current code that decodes cpuid fields is somewhat cryptic, since it uses
many bit operations. Use cpuid structs instead to clarify the code.
This introduces no functional change.
Signed-off-by: Nadav Amit na...@cs.technion.ac.il
---
arch/x86/kernel/cpu/common.c | 56
The code that deals with x86 cpuid fields is hard to follow since it performs
many bit operations and does not refer to cpuid fields explicitly. To
eliminate the need of opening a spec whenever dealing with cpuid fields, this
patch-set introduces structs that reflect the various cpuid functions.
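The struct-based decoding proposed in this cover letter might look roughly like the sketch below; the union and field names are illustrative (the real patch defines such structs in cpuid_def.h), and since C bitfield layout is implementation-defined, this assumes the usual GCC-on-x86 convention of allocating low-order bits first:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative struct view of CPUID.01H:EBX. Field names are
 * hypothetical; layout assumes low-order bits are allocated first.
 */
union cpuid1_ebx {
	struct {
		uint32_t brand_index     : 8;	/* bits  7:0  */
		uint32_t clflush_size    : 8;	/* bits 15:8, in 8-byte units */
		uint32_t max_logical_ids : 8;	/* bits 23:16 */
		uint32_t initial_apic_id : 8;	/* bits 31:24 */
	} split;
	uint32_t full;
};
```

With this, `e.split.clflush_size` replaces the opaque `(ebx >> 8) & 0xff`, which is the readability gain the patch set is after.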
Add structs that reflect the various cpuid fields in the x86 architecture.
Structs were added only for functions that are not pure bitmaps.
Signed-off-by: Nadav Amit na...@cs.technion.ac.il
---
arch/x86/include/asm/cpuid_def.h | 163 +++
1 file changed, 163
On 16/09/2014 14:12, Andrew Jones wrote:
Should it at least write 1 to the spinlock?
I thought about that. So on one hand we might get a somewhat functional
synchronization mechanism, which may be enough for some unit test that
doesn't enable caches, but still needs it. On the other
- Original Message -
On 16/09/2014 14:12, Andrew Jones wrote:
Should it at least write 1 to the spinlock?
I thought about that. So on one hand we might get a somewhat functional
synchronization mechanism, which may be enough for some unit test that
doesn't enable caches,
On 16/09/2014 14:43, Andrew Jones wrote:
I don't think we need to worry about this case. AFAIU, enabling the
caches for a particular cpu shouldn't require any synchronization.
So we should be able to do
enable caches
spin_lock
start other processors
spin_unlock
Ok,
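The boot ordering discussed above (enable caches, take the lock, only then start the other processors, unlock) can be mimicked in user space; the pthread "secondaries" below are an analogy for the real CPUs, not ARM startup code, and the names are hypothetical:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static atomic_int caches_enabled;
static atomic_int released;

static void spin_lock(void)   { while (atomic_flag_test_and_set(&lock)) ; }
static void spin_unlock(void) { atomic_flag_clear(&lock); }

static void *secondary(void *arg)
{
	(void)arg;
	spin_lock();	/* blocks until the primary unlocks */
	/* Because the primary locked before starting us, we are
	 * guaranteed to see its setup completed. */
	assert(atomic_load(&caches_enabled));
	atomic_fetch_add(&released, 1);
	spin_unlock();
	return 0;
}

int primary_boot(void)
{
	pthread_t t[2];

	atomic_store(&caches_enabled, 1);	/* 1. enable caches */
	spin_lock();				/* 2. take the lock */
	for (int i = 0; i < 2; i++)		/* 3. start the others */
		pthread_create(&t[i], 0, secondary, 0);
	spin_unlock();				/* 4. release them */
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], 0);
	return atomic_load(&released);
}
```

The point of the ordering is that the secondaries can only acquire the lock after the primary's setup is complete, so they never race with it.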
- Original Message -
- Original Message -
On 16/09/2014 14:12, Andrew Jones wrote:
Should it at least write 1 to the spinlock?
I thought about that. So on one hand we might get a somewhat functional
synchronization mechanism, which may be enough for some
- Original Message -
On 16/09/2014 14:43, Andrew Jones wrote:
I don't think we need to worry about this case. AFAIU, enabling the
caches for a particular cpu shouldn't require any synchronization.
So we should be able to do
enable caches
spin_lock
start
- Original Message -
On 16/09/2014 14:43, Andrew Jones wrote:
I don't think we need to worry about this case. AFAIU, enabling the
caches for a particular cpu shouldn't require any synchronization.
So we should be able to do
enable caches
spin_lock
start
* Nadav Amit na...@cs.technion.ac.il wrote:
The code that deals with x86 cpuid fields is hard to follow since it performs
many bit operations and does not refer to cpuid fields explicitly. To
eliminate the need of opening a spec whenever dealing with cpuid fields, this
patch-set introduces
On 15/09/2014 22:11, Andres Lagar-Cavilla wrote:
+ if (!locked) {
+ BUG_ON(npages != -EBUSY);
VM_BUG_ON perhaps?
@@ -1177,9 +1210,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool
*async, bool write_fault,
npages = get_user_page_nowait(current,
On 16 September 2014 01:10, Juan Quintela quint...@redhat.com wrote:
Juan Quintela quint...@redhat.com wrote:
Hi
Please, send any topic that you are interested in covering.
People have complained in the past that I don't cancel the call until
the very last minute. So, what do you think
Peter Maydell peter.mayd...@linaro.org wrote:
On 16 September 2014 01:10, Juan Quintela quint...@redhat.com wrote:
Juan Quintela quint...@redhat.com wrote:
Hi
Please, send any topic that you are interested in covering.
People have complained in the past that I don't cancel the call until
lkvm -i is currently broken on ARM/ARM64.
We should not try to convert smaller-than-4GB addresses into 64-bit
big endian and then stuff them into u32 variables if we expect to read
anything other than 0 out of them.
Adjust the type to u64 to write the proper address in BE format into
the /chosen
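The truncation bug described here is easy to demonstrate: after conversion to big-endian, a sub-4GB address has all its set bits in the upper half of the u64, so storing the result in a u32 yields 0. The helper names below are hypothetical, not kvmtool's actual code; __builtin_bswap64 plays the role of cpu_to_be64() on a little-endian host:

```c
#include <assert.h>
#include <stdint.h>

/* Buggy: convert to BE64, then truncate to u32. For any address
 * below 4GB the surviving low 32 bits are the (zero) high-order
 * bytes of the address, so the property reads back as 0. */
static uint32_t chosen_prop_u32(uint64_t addr)
{
	uint64_t be = __builtin_bswap64(addr);
	return (uint32_t)be;
}

/* Fixed: keep the full 64-bit big-endian value. */
static uint64_t chosen_prop_u64(uint64_t addr)
{
	return __builtin_bswap64(addr);
}
```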
- Original Message -
- Original Message -
On 16/09/2014 14:43, Andrew Jones wrote:
I don't think we need to worry about this case. AFAIU, enabling the
caches for a particular cpu shouldn't require any synchronization.
So we should be able to do
If virtio-blk and virtio-serial share an IRQ, the guest operating
system has to check each virtqueue for activity. That may introduce
some inefficiency.
AFAIK virtio-serial registers 64 virtqueues (on 31 ports +
console) even if everything is unused.
That could be the case if MSI is
On Tue, 16 Sep 2014 08:27:40 +0800
Amos Kong ak...@redhat.com wrote:
Set timeout to 10:
non-smp guest with quick backend (1.2M/s → about 490K/s)
That sounds like an awful lot. This is a 60% loss in throughput.
I don't think we can live with that.
--
Michael
Amos Kong ak...@redhat.com writes:
On Sun, Sep 14, 2014 at 01:12:58AM +0800, Amos Kong wrote:
On Thu, Sep 11, 2014 at 09:08:03PM +0930, Rusty Russell wrote:
Amos Kong ak...@redhat.com writes:
When I check hwrng attributes in sysfs, the cat process always gets
stuck if the guest has only 1 vcpu
On Tue, Sep 16, 2014 at 9:52 AM, Andres Lagar-Cavilla
andre...@google.com wrote:
Apologies to all. Resend as lists rejected my gmail-formatted version.
Now on plain text. Won't happen again.
On Tue, Sep 16, 2014 at 6:51 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 15/09/2014 22:11, Andres
On Tue, Sep 16, 2014 at 10:38:11AM -0400, Andrew Jones wrote:
- Original Message -
- Original Message -
On 16/09/2014 14:43, Andrew Jones wrote:
I don't think we need to worry about this case. AFAIU, enabling the
caches for a particular cpu shouldn't
On 16/09/2014 18:52, Andres Lagar-Cavilla wrote:
Was this:
down_read(&mm->mmap_sem);
npages = get_user_pages(NULL, mm, addr, 1, 1, 0, NULL, NULL);
up_read(&mm->mmap_sem);
the intention rather than get_user_pages_fast?
I meant the intention of the original author,
On Tue, Sep 16, 2014 at 11:29 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 16/09/2014 18:52, Andres Lagar-Cavilla wrote:
Was this:
down_read(&mm->mmap_sem);
npages = get_user_pages(NULL, mm, addr, 1, 1, 0, NULL, NULL);
up_read(&mm->mmap_sem);
the intention
On 2014-09-16 15:37, Andre Przywara wrote:
lkvm -i is currently broken on ARM/ARM64.
We should not try to convert smaller-than-4GB addresses into 64-bit
big endian and then stuff them into u32 variables if we expect to read
anything other than 0 out of them.
Adjust the type to u64 to write the
On 9/16/14 4:22 PM, Ingo Molnar wrote:
* Nadav Amit na...@cs.technion.ac.il wrote:
The code that deals with x86 cpuid fields is hard to follow since it performs
many bit operations and does not refer to cpuid fields explicitly. To
eliminate the need of opening a spec whenever dealing
2014-09-15 13:11-0700, Andres Lagar-Cavilla:
+int kvm_get_user_page_retry(struct task_struct *tsk, struct mm_struct *mm,
The suffix '_retry' is not best suited for this.
On first reading, I imagined we will be retrying something from before,
possibly calling it in a loop, but we are actually
On Tue, Sep 16, 2014 at 1:51 PM, Radim Krčmář rkrc...@redhat.com wrote:
2014-09-15 13:11-0700, Andres Lagar-Cavilla:
+int kvm_get_user_page_retry(struct task_struct *tsk, struct mm_struct *mm,
The suffix '_retry' is not best suited for this.
On first reading, I imagined we will be retrying
On Tue, Sep 09, 2014 at 10:21:24AM +0800, Ethan Zhao wrote:
This patch set introduces three PCI device flag operation helper functions
for setting a PCI device PF/VF to assigned or deassigned status, and for
checking it. Patches 2, 3, and 4 apply these helper functions to KVM, Xen,
and PCI.
v2: simplify
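A minimal sketch of the helper-function idea, assuming a dev_flags bitfield on the device structure; the flag value and struct layout here are illustrative rather than the exact kernel definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bit; the real value lives in include/linux/pci.h. */
#define PCI_DEV_FLAGS_ASSIGNED (1u << 2)

struct pci_dev { unsigned int dev_flags; };

/* Mark the device as assigned to a guest. */
static void pci_set_dev_assigned(struct pci_dev *pdev)
{
	pdev->dev_flags |= PCI_DEV_FLAGS_ASSIGNED;
}

/* Clear the assigned status. */
static void pci_clear_dev_assigned(struct pci_dev *pdev)
{
	pdev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
}

/* Check whether the device is currently assigned. */
static bool pci_is_dev_assigned(struct pci_dev *pdev)
{
	return pdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED;
}
```

Wrapping the bit operations behind named helpers is the whole point of the series: call sites in KVM, Xen, and the PCI core then state intent instead of open-coding flag arithmetic.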
On Thu, Sep 11, 2014 at 9:24 PM, Andre Przywara andre.przyw...@arm.com wrote:
Hi Anup,
On 08/09/14 09:17, Anup Patel wrote:
Instead of trying out each and every target type, we should
use the KVM_ARM_PREFERRED_TARGET vm ioctl to determine the target type
for KVM ARM/ARM64.
If
On Thu, Sep 11, 2014 at 9:37 PM, Andre Przywara andre.przyw...@arm.com wrote:
Anup,
On 08/09/14 09:17, Anup Patel wrote:
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux 3.16-rcX or higher, hence register the aarch64 target
type for it.
This patch enables us to run
On Thu, Sep 11, 2014 at 9:56 PM, Andre Przywara andre.przyw...@arm.com wrote:
On 08/09/14 09:17, Anup Patel wrote:
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.
Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM
[Emergency posting to fix the tag and couldn't find unmangled Cc list,
so some recipients were dropped, sorry. (I guess you are glad though).]
2014-09-16 14:01-0700, Andres Lagar-Cavilla:
On Tue, Sep 16, 2014 at 1:51 PM, Radim Krčmář rkrc...@redhat.com wrote:
2014-09-15 13:11-0700, Andres
On Wed, Sep 17, 2014 at 3:43 AM, Anup Patel apa...@apm.com wrote:
On Thu, Sep 11, 2014 at 9:24 PM, Andre Przywara andre.przyw...@arm.com
wrote:
Hi Anup,
On 08/09/14 09:17, Anup Patel wrote:
Instead of trying out each and every target type, we should
use KVM_ARM_PREFERRED_TARGET vm ioctl to
On Wed, Sep 17, 2014 at 3:59 AM, Anup Patel apa...@apm.com wrote:
On Thu, Sep 11, 2014 at 9:56 PM, Andre Przywara andre.przyw...@arm.com
wrote:
On 08/09/14 09:17, Anup Patel wrote:
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a
On Wed, Sep 17, 2014 at 3:54 AM, Anup Patel apa...@apm.com wrote:
On Thu, Sep 11, 2014 at 9:37 PM, Andre Przywara andre.przyw...@arm.com
wrote:
Anup,
On 08/09/14 09:17, Anup Patel wrote:
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in Linux 3.16-rcX or higher, hence
Hi Radim,
On Mon, Sep 15, 2014 at 09:33:52PM +0200, Radim Krčmář wrote:
2014-09-12 17:06-0400, Liang Chen:
Use kvm_mmu_flush_tlb as in the other places to make sure the vcpu
stat is incremented
Signed-off-by: Liang Chen liangchen.li...@gmail.com
---
Good catch.
arch/x86/kvm/vmx.c | 2 +-
1
Hi
On 08/01/2014 10:52 PM, Dr. David Alan Gilbert wrote:
* Yang Hongyang (yan...@cn.fujitsu.com) wrote:
We need a buffer to store migration data.
On the save side:
all saved data is written into the COLO buffer first, so that we can know
the total size of the migration data. This can also separate the
Dear KVM Developers:
I have some questions about how the KVM hypervisor requests and allocates
physical pages for the VM. I am using kernel version 3.2.14.
I run a microbenchmark in the VM, which declares an array of a certain
size and then assigns some value to all the elements in the array,
which
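The microbenchmark described can be sketched as below, under the assumption that it simply writes every page of a freshly allocated array so the host has to back each page with physical memory (demand paging); the 64 MiB size is illustrative:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define ARRAY_SIZE (64 * 1024 * 1024)	/* 64 MiB, illustrative */

/* Write every byte of a large allocation; each first write to a page
 * faults it in, so the guest's resident set grows to the array size.
 * Returns a per-page checksum just to keep the work observable. */
static size_t touch_pages(void)
{
	unsigned char *a = malloc(ARRAY_SIZE);
	size_t sum = 0;

	if (!a)
		return 0;
	memset(a, 0x5a, ARRAY_SIZE);	/* write-faults in every page */
	for (size_t i = 0; i < ARRAY_SIZE; i += 4096)
		sum += a[i];
	free(a);
	return sum;
}
```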
On Tue, Sep 16, 2014 at 3:34 PM, Radim Krčmář rkrc...@redhat.com wrote:
[Emergency posting to fix the tag and couldn't find unmangled Cc list,
so some recipients were dropped, sorry. (I guess you are glad though).]
2014-09-16 14:01-0700, Andres Lagar-Cavilla:
On Tue, Sep 16, 2014 at 1:51