On Thu, Sep 18, 2014 at 08:29:17AM +0800, Wanpeng Li wrote:
Hi Andres,
On Wed, Sep 17, 2014 at 10:51:48AM -0700, Andres Lagar-Cavilla wrote:
[...]
static inline int check_user_page_hwpoison(unsigned long addr)
{
int rc, flags = FOLL_TOUCH | FOLL_HWPOISON | FOLL_WRITE;
@@ -1177,9
On Wed, Sep 17, 2014 at 10:51:48AM -0700, Andres Lagar-Cavilla wrote:
When KVM handles a tdp fault it uses FOLL_NOWAIT. If the guest memory
has been swapped out or is behind a filemap, this will trigger async
readahead and return immediately. The rationale is that KVM will kick
back the guest
On Thu, Sep 18, 2014 at 02:29:54AM +0200, Radim Krčmář wrote:
I think you proposed to use a magic constant in place of MASK_FAM_X, so
Huh, what?
Second problem: Most elements don't begin at offset 0, so the usual
retrieval would add a shift, (repurposing max_monitor_line_size)
So what?
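The shifted retrieval mentioned above can be sketched as a plain mask-and-shift accessor. This is a minimal illustration, not the code under discussion; the field positions in the test are made up.

```c
#include <assert.h>

/* Fields that don't begin at offset 0 are extracted by shifting the
 * register down to bit 0 first, then applying the field mask. */
static unsigned int get_field(unsigned int reg, unsigned int shift,
                              unsigned int mask)
{
    return (reg >> shift) & mask;
}
```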
https://bugzilla.kernel.org/show_bug.cgi?id=84781
Bug ID: 84781
Summary: The guest will hang after live migration.
Product: Virtualization
Version: unspecified
Kernel Version: 3.17.0-rc1
Hardware: All
OS: Linux
https://bugzilla.kernel.org/show_bug.cgi?id=84781
--- Comment #1 from Zhou, Chao chao.z...@intel.com ---
the first bad commit is:
commit cbcf2dd3b3d4d990610259e8d878fc8dc1f17d80
Author: Thomas Gleixner t...@linutronix.de
Date: Wed Jul 16 21:04:54 2014 +
x86: kvm: Make
Hi All,
This is KVM upstream test result against kvm.git next branch and qemu.git
master branch.
kvm.git next branch: fd2752352bbc98850d83b5448a288d8991590317 based on
kernel 3.17.0-rc1
qemu.git master branch: e4d50d47a9eb15f42bdd561803a29a4d7c3eb8ec
We found one new bug and
Hello All,
I've been made an offer that I couldn't refuse :) to organize a Birds
of a Feather session concerning OVMF at the KVM Forum 2014.
Interested people, please sign up:
http://www.linux-kvm.org/page/KVM_Forum_2014_BOF#OVMF
Everyone else: apologies about the noise.
Thanks,
Laszlo
--
This patch should fix the bug reported in https://lkml.org/lkml/2014/9/11/249.
We have to initialize at least the atomic_flags and the cmd_flags when
allocating storage for the requests.
Otherwise blk_mq_timeout_check() might dereference uninitialized pointers when
racing with the creation of a
This patch should fix the bug reported in https://lkml.org/lkml/2014/9/11/249.
Test is still pending.
David Hildenbrand (1):
blk-mq: Avoid race condition with uninitialized requests
block/blk-mq.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--
1.8.5.5
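The initialize-at-allocation idea behind this fix can be sketched with a much-simplified, hypothetical analogue of a blk-mq request; the struct and function here are illustrations, not the kernel's actual definitions.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-in for a blk-mq request. The point of
 * the fix is that atomic_flags and cmd_flags must be initialized when
 * the request storage is allocated, not when a request is first
 * dispatched, so a concurrent timeout check never reads garbage. */
struct request {
    atomic_ulong atomic_flags;
    unsigned long cmd_flags;
};

static struct request *alloc_requests(size_t n)
{
    struct request *rqs = malloc(n * sizeof(*rqs));
    if (!rqs)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        atomic_init(&rqs[i].atomic_flags, 0); /* initialized at allocation */
        rqs[i].cmd_flags = 0;                 /* not at first use */
    }
    return rqs;
}
```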
https://bugzilla.kernel.org/show_bug.cgi?id=84781
Paolo Bonzini bonz...@gnu.org changed:
What     | Removed | Added
CC       |         | bonz...@gnu.org
---
Il 18/09/2014 10:23, Hu, Robert ha scritto:
Hi All,
This is KVM upstream test result against kvm.git next branch and qemu.git
master branch.
kvm.git next branch: fd2752352bbc98850d83b5448a288d8991590317 based
on kernel 3.17.0-rc1
qemu.git master branch:
2014-09-18 09:19+0200, Borislav Petkov:
On Thu, Sep 18, 2014 at 02:29:54AM +0200, Radim Krčmář wrote:
I think you proposed to use a magic constant in place of MASK_FAM_X, so
Huh, what?
Your example. It cannot be verbatim MASK_FAM_X in real code.
I interpreted it to be a placeholder for
Il 16/09/2014 04:06, Andrew Jones ha scritto:
We shouldn't try Load-Exclusive instructions unless we've enabled memory
management, as these instructions depend on the data cache unit's
coherency monitor. This patch adds a new setup boolean, initialized to false,
that is used to guard
Hello Laszlo,
Am 18.09.2014 um 10:23 schrieb Laszlo Ersek:
I've been made an offer that I couldn't refuse :) to organize a Birds
of a Feather session concerning OVMF at the KVM Forum 2014.
Interested people, please sign up:
http://www.linux-kvm.org/page/KVM_Forum_2014_BOF#OVMF
Nice
On 09/18/14 13:44, Andreas Färber wrote:
Hello Laszlo,
Am 18.09.2014 um 10:23 schrieb Laszlo Ersek:
I've been made an offer that I couldn't refuse :) to organize a Birds
of a Feather session concerning OVMF at the KVM Forum 2014.
Interested people, please sign up:
On Thu, Sep 18, 2014 at 12:18:23PM +0930, Rusty Russell wrote:
current_rng holds one reference, and we bump it every time we want
to do a read from it.
This means we only hold the rng_mutex to grab or drop a reference,
so accessing /sys/devices/virtual/misc/hw_random/rng_current doesn't
From: Rusty Russell ru...@rustcorp.com.au
Another interesting anti-pattern.
Signed-off-by: Rusty Russell ru...@rustcorp.com.au
---
drivers/char/hw_random/core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index
In next patch, we use reference counting for each struct hwrng,
changing reference count also needs to take mutex_lock. Before
releasing the lock, if we try to stop a kthread that waits to
take the lock to reduce the referencing count, deadlock will
occur.
Signed-off-by: Amos Kong
From: Rusty Russell ru...@rustcorp.com.au
The previous patch added one potential problem: we can still be
reading from a hwrng when it's unregistered. Add a wait for zero
in the hwrng_unregister path.
Signed-off-by: Rusty Russell ru...@rustcorp.com.au
---
drivers/char/hw_random/core.c | 2 ++
From: Rusty Russell ru...@rustcorp.com.au
Interesting anti-pattern.
Signed-off-by: Rusty Russell ru...@rustcorp.com.au
---
drivers/char/hw_random/core.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index
From: Rusty Russell ru...@rustcorp.com.au
current_rng holds one reference, and we bump it every time we want
to do a read from it.
This means we only hold the rng_mutex to grab or drop a reference,
so accessing /sys/devices/virtual/misc/hw_random/rng_current doesn't
block on read of /dev/hwrng.
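The locking pattern described above can be sketched in a userspace analogue, assuming a pthread mutex standing in for rng_mutex; the struct and names are simplified illustrations, not the driver's actual code.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* The mutex protects only the current_rng pointer and its reference
 * count; the (potentially slow) read itself happens outside the lock,
 * so sysfs readers never block behind a read of /dev/hwrng. */
struct rng {
    int ref;            /* protected by lock */
    const char *name;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct rng *current_rng;

static struct rng *get_current_rng(void)
{
    pthread_mutex_lock(&lock);
    struct rng *rng = current_rng;
    if (rng)
        rng->ref++;     /* bump the reference while holding the lock... */
    pthread_mutex_unlock(&lock);
    return rng;         /* ...then read data without the lock held */
}

static void put_rng(struct rng *rng)
{
    pthread_mutex_lock(&lock);
    rng->ref--;
    pthread_mutex_unlock(&lock);
}
```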
When I hotunplug a busy virtio-rng device or try to access
hwrng attributes in non-smp guest, it gets stuck.
My original fix was pain, Rusty posted a real fix. This patchset
fixed two issues in v1, and was tested by my 6+ cases.
| test 0:
| hotunplug rng device from qemu monitor
|
| test 1:
| guest)
From: Rusty Russell ru...@rustcorp.com.au
There's currently a big lock around everything, and it means that we
can't query sysfs (eg /sys/devices/virtual/misc/hw_random/rng_current)
while the rng is reading. This is a real problem when the rng is slow,
or blocked (eg. virtio_rng with qemu's
Il 18/09/2014 07:05, Xiao Guangrong ha scritto:
On 09/18/2014 02:35 AM, Liang Chen wrote:
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
- it's easy to make a direct request by
On Thu, Sep 18, 2014 at 12:13:08PM +0930, Rusty Russell wrote:
Amos Kong ak...@redhat.com writes:
I started a QEMU (non-smp) guest with one virtio-rng device, and read
random data from /dev/hwrng by dd:
# dd if=/dev/hwrng of=/dev/null
In the same time, if I check hwrng attributes
Il 17/09/2014 16:06, Borislav Petkov ha scritto:
AFAIK backward compatibility is usually maintained in x86. I did not
see in Intel SDM anything that says this CPUID field means something
for CPU X and something else for CPU Y. Anyhow, it is not different
than bitmasks in this respect.
You
On Thu, Sep 18, 2014 at 03:06:59PM +0200, Paolo Bonzini wrote:
The extra bit used to be reserved and thus will be zero on older
families. So, nothing?
thus will be zero is unfortunately simply not true.
From the SDM:
1.3.2 Reserved Bits and Software Compatibility
In many register and memory
Il 18/09/2014 15:26, Borislav Petkov ha scritto:
On Thu, Sep 18, 2014 at 03:06:59PM +0200, Paolo Bonzini wrote:
The extra bit used to be reserved and thus will be zero on older
families. So, nothing?
thus will be zero is unfortunately simply not true.
From the SDM:
1.3.2 Reserved Bits
On 09/18/2014 08:47 AM, Paolo Bonzini wrote:
Il 18/09/2014 07:05, Xiao Guangrong ha scritto:
On 09/18/2014 02:35 AM, Liang Chen wrote:
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
2014-09-17 14:35-0400, Liang Chen:
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
- it's easy to make a direct request by mistake
Solve these by postponing the counting to
2014-09-18 14:47+0200, Paolo Bonzini:
Il 18/09/2014 07:05, Xiao Guangrong ha scritto:
On 09/18/2014 02:35 AM, Liang Chen wrote:
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
-
On 09/18/2014 07:40 AM, KY Srinivasan wrote:
The main questions are what MSR index to use and how to detect the
presence of the MSR. I've played with two approaches:
1. Use CPUID to detect the presence of this feature. This is very easy for
KVM to implement by using a KVM-specific CPUID
On 09/18/2014 10:00 AM, Radim Krčmář wrote:
2014-09-17 14:35-0400, Liang Chen:
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
- it's easy to make a direct request by mistake
Solve
-Original Message-
From: virtualization-boun...@lists.linux-foundation.org
[mailto:virtualization-boun...@lists.linux-foundation.org] On Behalf Of Andy
Lutomirski
Sent: Wednesday, September 17, 2014 7:51 PM
To: Linux Virtualization; kvm list
Cc: Gleb Natapov; Paolo Bonzini;
On Thu, Sep 18, 2014 at 7:43 AM, H. Peter Anvin h...@zytor.com wrote:
On 09/18/2014 07:40 AM, KY Srinivasan wrote:
The main questions are what MSR index to use and how to detect the
presence of the MSR. I've played with two approaches:
1. Use CPUID to detect the presence of this feature.
On Thu, Sep 18, 2014 at 8:38 AM, Andy Lutomirski l...@amacapital.net wrote:
On Thu, Sep 18, 2014 at 7:43 AM, H. Peter Anvin h...@zytor.com wrote:
On 09/18/2014 07:40 AM, KY Srinivasan wrote:
The main questions are what MSR index to use and how to detect the
presence of the MSR. I've played
Il 18/09/2014 17:44, Andy Lutomirski ha scritto:
Slight correction: QEMU/KVM has optional support for Hyper-V feature
enumeration. Ideally the RNG seed mechanism would be enabled by
default, but I don't know whether the QEMU maintainers would be okay
with enabling the Hyper-V cpuid mechanism
* Instead of counting the number of coalesced flush requests,
we count the actual tlb flushes.
* Flushes from kvm_flush_remote_tlbs will also be counted.
* Freeing the namespace a bit by replacing kvm_mmu_flush_tlb()
with kvm_make_request() again.
---
v2 - v3:
* split the patch into a series
From: Radim Krčmář rkrc...@redhat.com
- we count KVM_REQ_TLB_FLUSH requests, not actual flushes
(KVM can have multiple requests for one flush)
- flushes from kvm_flush_remote_tlbs aren't counted
- it's easy to make a direct request by mistake
Solve these by postponing the counting to
A one-line wrapper around kvm_make_request does not seem
particularly useful. Replace kvm_mmu_flush_tlb() with
kvm_make_request() again to free the namespace a bit.
Signed-off-by: Liang Chen liangchen.li...@gmail.com
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/mmu.c |
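The counting change this series describes, incrementing the statistic where the flush actually happens rather than where the request is made, can be sketched with a hypothetical, much-reduced vcpu struct; the names here only echo the discussion, they are not KVM's real definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* tlb_flush_requested stands in for the KVM_REQ_TLB_FLUSH bit: requests
 * coalesce, so counting at request time would overcount, and flushes
 * made without a request would never be counted at all. */
struct vcpu {
    bool tlb_flush_requested;
    unsigned long stat_tlb_flush;
};

static void make_tlb_flush_request(struct vcpu *v)
{
    v->tlb_flush_requested = true;   /* no counting here */
}

static void vcpu_enter_guest(struct vcpu *v)
{
    if (v->tlb_flush_requested) {
        v->tlb_flush_requested = false;
        v->stat_tlb_flush++;         /* counted once per actual flush */
    }
}
```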
-Original Message-
From: Andy Lutomirski [mailto:l...@amacapital.net]
Sent: Thursday, September 18, 2014 8:38 AM
To: H. Peter Anvin
Cc: KY Srinivasan; Linux Virtualization; kvm list; Gleb Natapov; Paolo
Bonzini;
Theodore Ts'o
Subject: Re: Standardizing an MSR or other hypercall
On Thu, Sep 18, 2014 at 08:37:44PM +0800, Amos Kong wrote:
From: Rusty Russell ru...@rustcorp.com.au
current_rng holds one reference, and we bump it every time we want
to do a read from it.
This means we only hold the rng_mutex to grab or drop a reference,
so accessing
On Thu, Sep 18, 2014 at 9:36 AM, KY Srinivasan k...@microsoft.com wrote:
I am copying other Hyper-V engineers to this discussion.
Thanks, K.Y.
In terms of the address for the MSR, I suggest that you choose one
from the range between 4000H - 40FFH. The SDM (35.1
ARCHITECTURAL MSRS)
Il 18/09/2014 19:13, Nakajima, Jun ha scritto:
In terms of the address for the MSR, I suggest that you choose one
from the range between 4000H - 40FFH. The SDM (35.1
ARCHITECTURAL MSRS) says All existing and
future processors will not implement any features using any MSR in
this
-Original Message-
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo
Bonzini
Sent: Thursday, September 18, 2014 10:18 AM
To: Nakajima, Jun; KY Srinivasan
Cc: Mathew John; Theodore Ts'o; John Starks; kvm list; Gleb Natapov; Niels
Ferguson; Andy Lutomirski;
On Thu, Sep 18, 2014 at 10:20 AM, KY Srinivasan k...@microsoft.com wrote:
-Original Message-
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo
Bonzini
Sent: Thursday, September 18, 2014 10:18 AM
To: Nakajima, Jun; KY Srinivasan
Cc: Mathew John; Theodore Ts'o;
That certainly sound reasonable to me. How do you see discovery of that
working?
Thanks,
Jake Oshins
-Original Message-
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo Bonzini
Sent: Thursday, September 18, 2014 10:18 AM
To: Nakajima, Jun; KY Srinivasan
Cc:
2014-09-18 12:38-0400, Liang Chen:
A one-line wrapper around kvm_make_request does not seem
particularly useful. Replace kvm_mmu_flush_tlb() with
kvm_make_request() again to free the namespace a bit.
Signed-off-by: Liang Chen liangchen.li...@gmail.com
---
Reviewed-by: Radim Krčmář
On Thu, Sep 18, 2014 at 10:42 AM, Nakajima, Jun jun.nakaj...@intel.com wrote:
On Thu, Sep 18, 2014 at 10:20 AM, KY Srinivasan k...@microsoft.com wrote:
-Original Message-
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo
Bonzini
Sent: Thursday, September 18,
Quite frankly it might make more sense to define a cross-VM *cpuid* range. The
cpuid leaf can just point to the MSR. The big question is who will be willing
to be the registrar.
On September 18, 2014 11:35:39 AM PDT, Andy Lutomirski l...@amacapital.net
wrote:
On Thu, Sep 18, 2014 at 10:42
However, I think it would be better to have the MSR (and perhaps CPUID)
outside the hypervisor-reserved ranges, so that it becomes architecturally
defined. In some sense it is similar to the HYPERVISOR CPUID feature.
Yes, given that we want this to be hypervisor agnostic.
Actually,
Actually, that MSR address range has been reserved for that purpose, along
with:
- CPUID.EAX=1 - ECX bit 31 (always returns 0 on bare metal)
- CPUID.EAX=4000_00xxH leaves (i.e. HYPERVISOR CPUID)
I don't know whether this is documented anywhere, but Linux tries to
detect a hypervisor
On Thu, Sep 18, 2014 at 11:54 AM, Niels Ferguson ni...@microsoft.com wrote:
Defining a standard way of transferring random numbers between the host and
the guest is an excellent idea.
As the person who writes the RNG code in Windows, I have a few comments:
DETECTION:
It should be possible
On Thu, Sep 18, 2014 at 11:58 AM, Paolo Bonzini pbonz...@redhat.com wrote:
Actually, that MSR address range has been reserved for that purpose, along
with:
- CPUID.EAX=1 - ECX bit 31 (always returns 0 on bare metal)
- CPUID.EAX=4000_00xxH leaves (i.e. HYPERVISOR CPUID)
I don't know
On Monday 01 September 2014 09:37:30, Michael S. Tsirkin wrote:
Why do we need INT#x?
How about setting IRQF_SHARED for the config interrupt
while using MSI-X? You'd have to read ISR to check that the
interrupt was intended for your device.
The virtio 0.9.5 spec says that ISR is unused when
On Thu, Sep 18, 2014 at 6:00 AM, Paolo Bonzini pbonz...@redhat.com wrote:
Il 18/09/2014 01:11, Steven ha scritto:
I agree with you that the memory allocated from the
kmem_cache_alloc_node and kmalloc_node should be not returned to the
user space process in the VM.
Can I understand in this
On Thu, Sep 18, 2014 at 12:07 PM, Andy Lutomirski l...@amacapital.net wrote:
Might Intel be willing to extend that range to 0x4000 -
0x400f? And would Microsoft be okay with using this mechanism for
discovery?
So, for CPUID, the SDM (Table 3-17. Information Returned by CPUID) says
Initialization of L2 guest with -cpu host, on L1 guest with -cpu host
triggers:
(qemu) KVM: entry failed, hardware error 0x7
...
nested_vmx_run: VMCS MSR_{LOAD,STORE} unsupported
Nested VMX MSR load/store support is not sufficient to
allow perf for L2 guest.
Until properly fixed, trap CPUID
Defining a standard way of transferring random numbers between the host and the
guest is an excellent idea.
As the person who writes the RNG code in Windows, I have a few comments:
DETECTION:
It should be possible to detect this feature through CPUID or similar
mechanism. That allows the code
On Thu, Sep 18, 2014 at 2:21 PM, Nakajima, Jun jun.nakaj...@intel.com wrote:
On Thu, Sep 18, 2014 at 12:07 PM, Andy Lutomirski l...@amacapital.net wrote:
Might Intel be willing to extend that range to 0x4000 -
0x400f? And would Microsoft be okay with using this mechanism for
On 09/18/2014 02:46 PM, David Hepkin wrote:
I'm not sure what you mean by this mechanism? Are you suggesting that each
hypervisor put CrossHVPara\0 somewhere in the 0x4000 - 0x400f CPUID
range, and an OS has to do a full scan of this CPUID range on boot to find
it? That seems
On Thu, Sep 18, 2014 at 2:46 PM, David Hepkin david...@microsoft.com wrote:
I'm not sure what you mean by this mechanism? Are you suggesting that each
hypervisor put CrossHVPara\0 somewhere in the 0x4000 - 0x400f CPUID
range, and an OS has to do a full scan of this CPUID range on
I'm not sure what you mean by this mechanism? Are you suggesting that each
hypervisor put CrossHVPara\0 somewhere in the 0x4000 - 0x400f CPUID
range, and an OS has to do a full scan of this CPUID range on boot to find it?
That seems pretty inefficient. An OS will take 1000's of
On 09/18/2014 03:00 PM, Andy Lutomirski wrote:
On Thu, Sep 18, 2014 at 2:46 PM, David Hepkin david...@microsoft.com wrote:
I'm not sure what you mean by this mechanism? Are you suggesting that
each hypervisor put CrossHVPara\0 somewhere in the 0x4000 - 0x400f
CPUID range, and an OS
On Thu, Sep 18, 2014 at 2:57 PM, H. Peter Anvin h...@zytor.com wrote:
On 09/18/2014 02:46 PM, David Hepkin wrote:
I'm not sure what you mean by this mechanism? Are you suggesting that
each hypervisor put CrossHVPara\0 somewhere in the 0x4000 - 0x400f
CPUID range, and an OS has to do
On Thu, Sep 11, 2014 at 07:03:32PM +0200, Christoffer Dall wrote:
On Thu, Sep 11, 2014 at 10:14:13AM +0200, Eric Auger wrote:
On 09/11/2014 05:09 AM, Christoffer Dall wrote:
On Mon, Sep 01, 2014 at 10:53:04AM +0200, Eric Auger wrote:
This patch enables irqfd on ARM.
irqfd framework
The chief advantage I see to using a hypercall based mechanism is that it would
work across more architectures. MSR's and CPUID's are specific to X86. If we
ever wanted this same mechanism to be available on an architecture that doesn't
support MSR's, a hypercall based approach would allow
Correct a simple mistake of checking the wrong variable
before a dereference, resulting in the dereference not being
properly protected by rcu_dereference().
Signed-off-by: Sam Bobroff sam.bobr...@au1.ibm.com
---
virt/kvm/kvm_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
This patchset updates KVMTOOL to use some of the features
supported by Linux-3.16 KVM ARM/ARM64, such as:
1. Target CPU == Host using KVM_ARM_PREFERRED_TARGET vm ioctl
2. Target CPU type Potenza for using KVMTOOL on X-Gene
3. PSCI v0.2 support for Aarch32 and Aarch64 guest
4. System event exit
Instead, of trying out each and every target type we should
use KVM_ARM_PREFERRED_TARGET vm ioctl to determine target type
for KVM ARM/ARM64.
If KVM_ARM_PREFERRED_TARGET vm ioctl fails then we fallback to
old method of trying all known target types.
If KVM_ARM_PREFERRED_TARGET vm ioctl succeeds
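The probing order described above, prefer the kernel's answer, fall back to trying each known target, can be sketched as pure selection logic. The ioctl results are passed in as plain values here so the sketch stays self-contained; in the real tool they would come from ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, ...) and per-target KVM_ARM_VCPU_INIT probes.

```c
#include <assert.h>

/* pref_ret: return code of the KVM_ARM_PREFERRED_TARGET ioctl (0 on
 * success); pref_target: the target it reported. probe_ok[i] is nonzero
 * when the KVM_ARM_VCPU_INIT probe accepts known[i]. */
static int pick_target(int pref_ret, int pref_target,
                       const int *known, const int *probe_ok, int nknown)
{
    if (pref_ret == 0)
        return pref_target;            /* new method: trust the kernel */
    for (int i = 0; i < nknown; i++)   /* old method: try each known type */
        if (probe_ok[i])
            return known[i];
    return -1;                         /* no usable target */
}
```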
If in-kernel KVM support PSCI-0.2 emulation then we should set
KVM_ARM_VCPU_PSCI_0_2 feature for each guest VCPU and also
provide "arm,psci-0.2", "arm,psci" as the PSCI compatible strings.
This patch updates kvm_cpu__arch_init() and setup_fdt() as
per above.
Signed-off-by: Pranavkumar Sawargaonkar
The VCPU target type KVM_ARM_TARGET_XGENE_POTENZA is available
in latest Linux-3.16-rcX or higher hence register aarch64 target
type for it.
This patch enables us to run KVMTOOL on X-Gene Potenza host.
Signed-off-by: Pranavkumar Sawargaonkar pranavku...@linaro.org
Signed-off-by: Anup Patel
The KVM_EXIT_SYSTEM_EVENT exit reason was added to define
architecture independent system-wide events for a Guest.
Currently, it is used by in-kernel PSCI-0.2 emulation of
KVM ARM/ARM64 to inform user space about PSCI SYSTEM_OFF
or PSCI SYSTEM_RESET request.
For now, we simply treat all
On Thu, Sep 18, 2014 at 09:13:26AM +0300, Gleb Natapov wrote:
On Thu, Sep 18, 2014 at 08:29:17AM +0800, Wanpeng Li wrote:
Hi Andres,
On Wed, Sep 17, 2014 at 10:51:48AM -0700, Andres Lagar-Cavilla wrote:
[...]
static inline int check_user_page_hwpoison(unsigned long addr)
{
int rc,
On Thu, Sep 18, 2014 at 3:07 PM, Andy Lutomirski l...@amacapital.net wrote:
So, as a concrete straw-man:
CPUID leaf 0x4800 would return a maximum leaf number in EAX (e.g.
0x4801) along with a signature value (e.g. CrossHVPara\0) in
EBX, ECX, and EDX.
CPUID 0x4801.EAX would
On Thu, Sep 18, 2014 at 5:49 PM, Nakajima, Jun jun.nakaj...@intel.com wrote:
On Thu, Sep 18, 2014 at 3:07 PM, Andy Lutomirski l...@amacapital.net wrote:
So, as a concrete straw-man:
CPUID leaf 0x4800 would return a maximum leaf number in EAX (e.g.
0x4801) along with a signature value
On Thu, Sep 18, 2014 at 6:03 PM, Andy Lutomirski l...@amacapital.net wrote:
On Thu, Sep 18, 2014 at 5:49 PM, Nakajima, Jun jun.nakaj...@intel.com wrote:
On Thu, Sep 18, 2014 at 3:07 PM, Andy Lutomirski l...@amacapital.net wrote:
So, as a concrete straw-man:
CPUID leaf 0x4800 would return
https://bugzilla.kernel.org/show_bug.cgi?id=84781
--- Comment #3 from Zhou, Chao chao.z...@intel.com ---
test the bug on linux.git
Commit: 2ce7598c9a453e0acd0e07be7be3f5eb39608ebd
Kernel 3.17.0-rc4
Commit: d9773ceabfaf3f27b8a36fac035b74ee599df900
kernel :3.17.0-rc5+
after live migration, the
On Thu, Sep 18, 2014 at 5:32 PM, Wanpeng Li wanpeng...@linux.intel.com wrote:
On Thu, Sep 18, 2014 at 09:13:26AM +0300, Gleb Natapov wrote:
On Thu, Sep 18, 2014 at 08:29:17AM +0800, Wanpeng Li wrote:
Hi Andres,
On Wed, Sep 17, 2014 at 10:51:48AM -0700, Andres Lagar-Cavilla wrote:
[...]
-Original Message-
From: Paolo Bonzini [mailto:paolo.bonz...@gmail.com] On Behalf Of Paolo
Bonzini
Sent: Thursday, September 18, 2014 5:26 PM
To: Hu, Robert; kvm@vger.kernel.org
Subject: Re: KVM Test report, kernel fd275235... qemu e4d50d47...
Il 18/09/2014 10:23, Hu, Robert ha
On Tue, 09/02 12:06, Amit Shah wrote:
On (Mon) 01 Sep 2014 [20:52:46], Zhang Haoyu wrote:
Hi, all
I start a VM with virtio-serial (default ports number: 31), and found
that virtio-blk performance degradation happened, about 25%, this
problem can be reproduced 100%.
without