On Mon, 2014-02-24 at 03:01 +, Zhang, Yang Z wrote:
Vadim Rozenfeld wrote on 2014-02-14:
On Fri, 2014-02-14 at 02:35 +, Liu, RongrongX wrote:
Vadim Rozenfeld wrote on 2014-02-12:
On Wed, 2014-02-12 at 01:33 +, Zhang, Yang Z wrote:
Vadim Rozenfeld wrote on 2014-02-10:
On Mon,
Vadim Rozenfeld wrote on 2014-02-24:
On Mon, 2014-02-24 at 03:01 +, Zhang, Yang Z wrote:
Vadim Rozenfeld wrote on 2014-02-14:
On Fri, 2014-02-14 at 02:35 +, Liu, RongrongX wrote:
Vadim Rozenfeld wrote on 2014-02-12:
On Wed, 2014-02-12 at 01:33 +, Zhang, Yang Z wrote:
Vadim
On Mon, 2014-02-24 at 08:35 +, Zhang, Yang Z wrote:
Vadim Rozenfeld wrote on 2014-02-24:
On Mon, 2014-02-24 at 03:01 +, Zhang, Yang Z wrote:
Vadim Rozenfeld wrote on 2014-02-14:
On Fri, 2014-02-14 at 02:35 +, Liu, RongrongX wrote:
Vadim Rozenfeld wrote on 2014-02-12:
On Wed,
On Mon, Feb 24, 2014 at 05:32:24AM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi MST, MKP, Paolo & Co,
The following is an initial RFC series for allowing vhost/scsi to
accept T10 protection information (PI) as separate SGLs alongside
existing data
Il 24/02/2014 06:32, Nicholas A. Bellinger ha scritto:
AFAICT up until this point the ->prio field has been unused, but
I'm certainly open to better ways of signaling (to vhost) that some
number of metadata iovs are to be expected.. Any thoughts..?
Hi nab,
the virtio-scsi side of the patch is
Liu, Jinsong wrote:
Paolo Bonzini wrote:
Il 21/02/2014 18:57, Liu, Jinsong ha scritto:
- F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | F(RDSEED) |
+ F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | F(MPX) | F(RDSEED) |
F(ADX);
MPX also needs to be conditional on
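Paolo's point is that F(MPX) should be computed like f_invpcid, i.e. dropped from the advertised CPUID mask when the host lacks support. A minimal sketch with hypothetical names (cpuid_7_ebx_mask and host_mpx_supported are illustrative, not KVM's):

```c
#include <stdint.h>

#define F_RTM (1u << 11)   /* CPUID.(EAX=7,ECX=0):EBX.RTM */
#define F_MPX (1u << 14)   /* CPUID.(EAX=7,ECX=0):EBX.MPX */

/* Hypothetical stand-in for a kvm_x86_ops->mpx_supported() check:
 * the MPX bit is only advertised when the host side supports it. */
static uint32_t cpuid_7_ebx_mask(int host_mpx_supported)
{
    uint32_t f_mpx = host_mpx_supported ? F_MPX : 0;
    return F_RTM | f_mpx;
}
```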
These patches are version 5 to enable Intel MPX for KVM.
Version 1:
* Add some Intel MPX definitions
* Fix a cpuid(0x0d, 0) exposing bug, dynamic per XCR0 features enable/disable
* vmx and msr handle for MPX support at KVM
* enable MPX feature for guest
Version 2:
* remove generic MPX
From caddc009a6d2019034af8f2346b2fd37a81608d0 Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Mon, 24 Feb 2014 18:11:11 +0800
Subject: [PATCH v5 1/3] KVM: x86: Intel MPX vmx and msr handle
This patch handles the vmx and msr parts of the Intel MPX feature.
Signed-off-by: Xudong Hao
From 5d5a80cd172ea6fb51786369bcc23356b1e9e956 Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Mon, 24 Feb 2014 18:11:55 +0800
Subject: [PATCH v5 2/3] KVM: x86: add MSR_IA32_BNDCFGS to msrs_to_save
Add MSR_IA32_BNDCFGS to msrs_to_save, and corresponding logic
to
From 44c2abca2c2eadc6f2f752b66de4acc8131880c4 Mon Sep 17 00:00:00 2001
From: Liu Jinsong jinsong@intel.com
Date: Mon, 24 Feb 2014 18:12:31 +0800
Subject: [PATCH v5 3/3] KVM: x86: Enable Intel MPX for guest
This patch enables the Intel MPX feature for the guest.
Signed-off-by: Xudong Hao
Il 24/02/2014 11:58, Liu, Jinsong ha scritto:
@@ -599,6 +599,9 @@ int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
u64 old_xcr0 = vcpu->arch.xcr0;
u64 valid_bits;
+ if (!kvm_x86_ops->mpx_supported || !kvm_x86_ops->mpx_supported())
+ xcr0 =
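The (truncated) hunk masks MPX state out of the XCR0 bits a guest may set. A standalone toy model of that logic, assuming the relevant bits are BNDREGS (bit 3) and BNDCSR (bit 4); the names below are illustrative, not the kernel's:

```c
#include <stdint.h>

#define XSTATE_FP      (1ull << 0)
#define XSTATE_BNDREGS (1ull << 3)   /* MPX bound registers */
#define XSTATE_BNDCSR  (1ull << 4)   /* MPX bound config/status */
#define XSTATE_MPX     (XSTATE_BNDREGS | XSTATE_BNDCSR)

/* Returns 0 on success, 1 where the real __kvm_set_xcr would fault. */
static int toy_set_xcr0(uint64_t *xcr0, uint64_t val, int mpx_supported)
{
    uint64_t valid_bits = XSTATE_FP | XSTATE_MPX;

    if (!mpx_supported)
        valid_bits &= ~XSTATE_MPX;   /* guest must not enable MPX state */
    if (!(val & XSTATE_FP) || (val & ~valid_bits))
        return 1;
    *xcr0 = val;
    return 0;
}
```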
I agree it's either COW breaking or (similarly) locking pages that
the guest hasn't touched yet.
You can use prealloc or -rt mlock=on to avoid this problem.
Paolo
Or the new shared flag - IIRC shared VMAs don't do COW either.
Only if the problem isn't locking and zeroing of
When starting lots of dataplane devices the bootup takes very long on my
s390 system (prototype irqfd code). With larger setups we are even able
to trigger some timeouts in some userspace components.
Turns out that the KVM_SET_GSI_ROUTING ioctl takes very
long (strace claims up to 0.1 sec) when
Il 24/02/2014 12:58, Christian Borntraeger ha scritto:
When starting lots of dataplane devices the bootup takes very long on my
s390 system (prototype irqfd code). With larger setups we are even able
to trigger some timeouts in some userspace components.
Turns out that the KVM_SET_GSI_ROUTING
On Thu, Feb 20, 2014 at 11:21:10AM +1100, Chris Maltby wrote:
I'm not sure that this is the right place to raise this issue, so
apologies in advance.
I have an RHEL6 system with two qemu/kvm guests, one running RHEL5
(x64) and one running Windows 7 (x64). These variously provide local
and
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by bridge. This can easily happen when a TCP
connection between guest and host sits in the LAST_ACK state.
Signed-off-by: Chuanyu Qin
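The idea of the fix can be sketched as a toy model (the struct and helpers below are stand-ins, not the kernel's skb or skb_orphan_frags): before a zerocopy skb is parked somewhere it may sit indefinitely, its guest-owned frags are copied and the completion callback fired, so the guest tx ring entry is released and the nic cannot hang.

```c
struct toy_skb {
    int zerocopy;        /* frags still belong to the guest */
    int *tx_completed;   /* completion counter seen by the "guest" */
};

/* Model of skb_orphan_frags(): copy the frags, fire the callback. */
static int toy_orphan_frags(struct toy_skb *skb)
{
    if (!skb->zerocopy)
        return 0;
    skb->zerocopy = 0;       /* skb no longer references guest memory */
    (*skb->tx_completed)++;  /* vhost zerocopy callback would run here */
    return 0;
}

/* Queueing to a socket that may hold the skb (e.g. LAST_ACK): the
 * patch orphans the frags first, before the long-term queueing. */
static int toy_queue(struct toy_skb *skb)
{
    return toy_orphan_frags(skb);
}
```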
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by openvswitch. This can easily happen when a
TCP connection between guest and host sits in the LAST_ACK state.
Signed-off-by: Chuanyu Qin
On Mon, Feb 24, 2014 at 09:12:20PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by bridge. This can easily happen when a
LAST_ACK state tcp
On Mon, Feb 24, 2014 at 09:15:12PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by openvswitch. This can easily happen when a
LAST_ACK state tcp
On 2014/2/24 21:29, Michael S. Tsirkin wrote:
On Mon, Feb 24, 2014 at 09:12:20PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by bridge. This could
On Fri, 14 Feb 2014 10:55:31 +0100
Christian Borntraeger borntrae...@de.ibm.com wrote:
On 14/02/14 00:32, Paolo Bonzini wrote:
Il 13/02/2014 23:54, Christian Borntraeger ha scritto:
We had several variants but in the end we tried to come up with a patch
that does not
influence other
On 2014/2/24 21:52, Qin Chuanyu wrote:
On 2014/2/24 21:29, Michael S. Tsirkin wrote:
On Mon, Feb 24, 2014 at 09:12:20PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it
Hi,
I'm also planning a similar patch, but it will call skb_orphan_frags on
the skb in datapath.c::queue_userspace_packet, right before
skb_zerocopy, so packets sent up to userspace via Netlink don't harm
guests. I haven't checked your patch thoroughly; does it handle a
different scenario?
Commit 3b1274463fa8d074dd3bc77efe25b59a4ddd491e uses GCC's
labels-as-values extension to handle exceptions, but GCC 4.8 mistakenly
uses the next function body as a jump label for functions which
do not return. Fixed by returning an int value from those functions.
See
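For reference, the GCC extension in question ("labels as values", i.e. computed goto) looks like the sketch below. It only demonstrates the mechanism; it does not reproduce the kvm-unit-tests miscompile, and the function names are illustrative:

```c
/* Each table entry is the address of a label (&&label), and
 * `goto *ptr` jumps to it -- the extension the commit relies on.
 * Making every handler a reachable path of an int-returning
 * function matches the shape of the fix described above. */
static int dispatch(int which)
{
    static void *const targets[] = { &&handle_a, &&handle_b };
    int result;

    goto *targets[which];
handle_a:
    result = 1;
    goto out;
handle_b:
    result = 2;
    goto out;
out:
    return result;
}
```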
Paolo Bonzini wrote:
Il 24/02/2014 11:58, Liu, Jinsong ha scritto:
@@ -599,6 +599,9 @@ int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
u64 old_xcr0 = vcpu->arch.xcr0;
u64 valid_bits;
+if (!kvm_x86_ops->mpx_supported || !kvm_x86_ops->mpx_supported())
+
When the tsc is marked unstable on the host it causes global clock
updates to be requested each time a vcpu is loaded, nearly halting
all progress on guests with a large number of vcpus.
Fix this by only requesting a local clock update unless the vcpu
is migrating to another cpu.
Signed-off-by:
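The logic of the fix can be modeled in a few lines. The request names below only echo KVM's KVM_REQ_*_CLOCK_UPDATE machinery; this is a sketch, not the kernel's code: the expensive global update is requested only when the vcpu actually migrated to a different host cpu.

```c
enum clock_req { REQ_LOCAL_CLOCK, REQ_GLOBAL_CLOCK };

struct toy_vcpu { int cpu; };

/* Called on vcpu load when the host tsc is unstable: a plain reload
 * on the same cpu gets the cheap local update; only a migration to
 * another cpu pays for the global one. */
static enum clock_req toy_vcpu_load(struct toy_vcpu *v, int new_cpu)
{
    enum clock_req req = (v->cpu == new_cpu) ? REQ_LOCAL_CLOCK
                                             : REQ_GLOBAL_CLOCK;
    v->cpu = new_cpu;
    return req;
}
```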
On Mon, Feb 24, 2014 at 09:52:11PM +0800, Qin Chuanyu wrote:
On 2014/2/24 21:29, Michael S. Tsirkin wrote:
On Mon, Feb 24, 2014 at 09:12:20PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has
On 2014-02-24 16:25, Marius Vlad wrote:
Commit 3b1274463fa8d074dd3bc77efe25b59a4ddd491e uses GCC's
labels-as-values extension to handle exceptions, but GCC 4.8 mistakenly
uses the next function body as a jump label for functions which
do not return. Fixed by returning an int value for
Il 24/02/2014 16:37, Liu, Jinsong ha scritto:
So patch v5 would be applied except you will remove the incorrect
hunk, and you will send a patch strengthening guest_supported_xcr0?
Yes.
Paolo
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to
On Mon, Feb 24, 2014 at 04:58:45PM +0100, Jan Kiszka wrote:
On 2014-02-24 16:25, Marius Vlad wrote:
Commit 3b1274463fa8d074dd3bc77efe25b59a4ddd491e uses GCC's
labels-as-values extension to handle exceptions, but GCC 4.8 mistakenly
uses the next function body as a jump label for
Read-only large sptes can be created due to read-only faults as
follows:
- QEMU pagetable entry that maps guest memory is read-only
due to COW.
- Guest read faults such memory, COW is not broken, because
it is a read-only fault.
- Enable dirty logging, large spte not nuked because it is
On Thu, 2014-02-20 at 12:31 -0800, Luis R. Rodriguez wrote:
On Wed, Feb 19, 2014 at 4:56 PM, Dan Williams d...@redhat.com wrote:
Note that there isn't yet a disable_ipv4 knob though, I was
perhaps-too-subtly trying to get you to send a patch for it, since I can
use it too :)
Sure, can
Hi,
I noticed that KVM (with VMX at least) enters an infinite loop of
vmentries and ept-violations when it has to set the accessed bit in a
guest page table that is in read-only memory (namely: the F-segment of
the BIOS). I don't think this is the proper reaction...
Jan
--
Siemens AG, Corporate
Great news: the organizations for Google Summer of Code 2014 have been
announced and QEMU is participating again this year!
If you are a student who is interested in a 12-week full-time project
working on QEMU, KVM, or libvirt this summer, head over to our project
ideas page:
On Mon, Feb 24, 2014 at 10:22 AM, Dan Williams d...@redhat.com wrote:
My use-case would simply be to have an analogue for the disable_ipv6
case. In the future I expect more people will want to disable IPv4 as
they move to IPv6. If you don't have something like disable_ipv4, then
there's no
On 01/23/2014 07:55 PM, Dave Hansen wrote:
On 01/21/2014 08:38 AM, Toralf Förster wrote:
Jan 21 17:18:57 n22 kernel: INFO: rcu_sched self-detected stall on CPU { 2}
(t=60001 jiffies g=18494 c=18493 q=183951)
Jan 21 17:18:57 n22 kernel: sending
On Mon, Feb 24, 2014 at 04:42:29PM +0100, Andrew Jones wrote:
When the tsc is marked unstable on the host it causes global clock
updates to be requested each time a vcpu is loaded, nearly halting
all progress on guests with a large number of vcpus.
Fix this by only requesting a local clock
From: Dan Williams d...@redhat.com
Date: Mon, 24 Feb 2014 12:22:00 -0600
In the future I expect more people will want to disable IPv4 as
they move to IPv6.
I definitely don't.
I've been lightly following this conversation and I have to say
a few things.
disable_ipv6 was added because people
On Mon, 2014-02-24 at 18:04 -0500, David Miller wrote:
From: Dan Williams d...@redhat.com
Date: Mon, 24 Feb 2014 12:22:00 -0600
In the future I expect more people will want to disable IPv4 as
they move to IPv6.
I definitely don't.
I've been lightly following this conversation and I
From: Ben Hutchings b...@decadent.org.uk
Date: Tue, 25 Feb 2014 00:02:00 +
You can run an internal network, or access network, as v6-only with
NAT64 and DNS64 at the border. I believe some mobile networks are doing
this; it was also done on the main FOSDEM wireless network this year.
On Mon, 2014-02-24 at 19:12 -0500, David Miller wrote:
From: Ben Hutchings b...@decadent.org.uk
Date: Tue, 25 Feb 2014 00:02:00 +
You can run an internal network, or access network, as v6-only with
NAT64 and DNS64 at the border. I believe some mobile networks are doing
this; it was
On 2014/2/24 23:49, Michael S. Tsirkin wrote:
On Mon, Feb 24, 2014 at 09:52:11PM +0800, Qin Chuanyu wrote:
On 2014/2/24 21:29, Michael S. Tsirkin wrote:
On Mon, Feb 24, 2014 at 09:12:20PM +0800, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb in
On 2014/2/24 22:45, Zoltan Kiss wrote:
Hi,
I'm also planning a similar patch, but it will call skb_orphan_frags on
the skb in datapath.c::queue_userspace_packet, right before
skb_zerocopy, so packets sent up to userspace via Netlink don't harm
guests. I haven't checked your patch thoroughly,
On Tue, Feb 25, 2014 at 02:01:59AM +, Ben Hutchings wrote:
On Mon, 2014-02-24 at 19:12 -0500, David Miller wrote:
From: Ben Hutchings b...@decadent.org.uk
Date: Tue, 25 Feb 2014 00:02:00 +
You can run an internal network, or access network, as v6-only with
NAT64 and DNS64 at
On 02/25/2014 12:59 AM, Marcelo Tosatti wrote:
Read-only large sptes can be created due to read-only faults as
follows:
- QEMU pagetable entry that maps guest memory is read-only
due to COW.
- Guest read faults such memory, COW is not broken, because
it is a read-only fault.
- Enable
I got a report of someone trying to run tests with a large amount of
RAM (4GB), which broke the guest, as the free_memory() function (called
by setup_vm()) will override the PCI hole.
Let's document memory constraints so that people don't do that.
Signed-off-by: Luiz Capitulino lcapitul...@redhat.com
On 02/24/2014 09:12 PM, Qin Chuanyu wrote:
With vhost tx zero_copy, the guest nic might hang when the host holds an
skb, delivered by the guest, in a socket queue. The case has been solved
in tun; it is also needed by bridge. This can easily happen when a
LAST_ACK state TCP connection occurs between guest
On Mon, 2014-02-24 at 11:23 +0100, Paolo Bonzini wrote:
Il 24/02/2014 06:32, Nicholas A. Bellinger ha scritto:
AFAICT up until this point the ->prio field has been unused, but
I'm certainly open to better ways of signaling (to vhost) that some
number of metadata iovs are to be expected..
We used to stop handling tx when the number of pending DMAs
exceeded VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive in some cases, since
any delay or blocking of a single packet may delay or block the guest's
transmission.
The guest kicks vhost based on the vring flag status, and performance
improves; vhost_zerocopy_callback could do this in the same way. As
virtqueue_enable_cb needs one more check after changing the status of
the avail_ring flags, vhost should do the same thing after
vhost_enable_notify. Test results are listed as
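The "one more check" being described is the classic race-free event-suppression pattern shared by virtqueue_enable_cb and vhost_enable_notify. A toy model (not vhost's actual code):

```c
struct toy_ring {
    int notify_enabled;  /* peer should kick when it adds buffers */
    int avail;           /* buffers the peer has already published */
};

/* Flip the flag, then look at the ring once more.  A buffer
 * published in the window before the flag write would otherwise
 * never trigger a kick, stalling the queue. */
static int toy_enable_and_recheck(struct toy_ring *r)
{
    r->notify_enabled = 1;
    /* the real code issues a memory barrier here */
    return r->avail > 0;  /* nonzero: process now, don't wait for a kick */
}
```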
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
The guest kicks vhost based on the vring flag status, and performance
improves; vhost_zerocopy_callback could do this in the same way. As
virtqueue_enable_cb needs one more check after changing the status of
the avail_ring flags, vhost should do the same thing after
On 2014/2/25 15:38, Jason Wang wrote:
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
The guest kicks vhost based on the vring flag status, and performance
improves; vhost_zerocopy_callback could do this in the same way. As
virtqueue_enable_cb needs one more check after changing the status of
the avail_ring