The implementation of the current virtio-balloon is not very
efficient. The time spent on the different stages of inflating
the balloon to 7GB of an 8GB idle guest:
a. allocating pages (6.5%)
b. sending PFNs to host (68.3%)
c. address translation (6.1%)
d. madvise (19%)
It takes about 4126ms for the
Support the request for the VM's unused page information and respond with
a page bitmap. QEMU can make use of this bitmap and the dirty page
logging mechanism to skip the transportation of these unused pages,
which is very helpful for speeding up the live migration process.
Signed-off-by: Liang Li
Cc: Micha
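As a rough illustration of how such a bitmap could be combined with dirty
page logging on the QEMU side (all names below are made up for the sketch,
not taken from the posted patches):

#include <stdbool.h>
#include <stdint.h>

#define PFN_BITS_PER_LONG (8 * sizeof(unsigned long))

static bool pfn_bit_set(const unsigned long *bitmap, uint64_t pfn)
{
    return (bitmap[pfn / PFN_BITS_PER_LONG] >> (pfn % PFN_BITS_PER_LONG)) & 1;
}

/* Send only pages that are dirty and not reported as unused by the guest. */
static void send_dirty_pages(const unsigned long *dirty_bitmap,
                             const unsigned long *unused_bitmap,
                             uint64_t nr_pfns,
                             void (*send_page)(uint64_t pfn))
{
    uint64_t pfn;

    for (pfn = 0; pfn < nr_pfns; pfn++) {
        if (pfn_bit_set(dirty_bitmap, pfn) &&
            !pfn_bit_set(unused_bitmap, pfn))
            send_page(pfn);
    }
}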
Save the unused page info into a page bitmap. The virtio balloon
driver calls this new API to get the unused page bitmap and sends
the bitmap to the hypervisor (QEMU) to speed up live migration.
While the bitmap is being sent, some of the pages may be modified and are
no longer free; this inaccuracy can be cor
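For illustration only, a guest-side helper along these lines could populate
such a bitmap by walking the buddy free lists; the actual API added by the
patch may differ in name and signature:

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/bitops.h>

/*
 * Sketch: mark the PFN of every page currently on the buddy free lists in
 * 'bitmap' (one bit per PFN, caller-allocated and zeroed). Pages can be
 * allocated again as soon as the zone lock is dropped, which is the
 * inaccuracy mentioned above.
 */
static void fill_unused_page_bitmap(unsigned long *bitmap)
{
        struct zone *zone;
        unsigned long flags;
        unsigned int order, t;
        struct page *page;

        for_each_populated_zone(zone) {
                spin_lock_irqsave(&zone->lock, flags);
                for (order = 0; order < MAX_ORDER; order++) {
                        for (t = 0; t < MIGRATE_TYPES; t++) {
                                list_for_each_entry(page,
                                        &zone->free_area[order].free_list[t],
                                        lru) {
                                        unsigned long pfn = page_to_pfn(page);
                                        unsigned long i;

                                        for (i = 0; i < (1UL << order); i++)
                                                set_bit(pfn + i, bitmap);
                                }
                        }
                }
                spin_unlock_irqrestore(&zone->lock, flags);
        }
}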
Define a new feature bit which supports a new virtual queue. This
new virtual queue is for information exchange between the hypervisor
and the guest. The hypervisor (VMM) can make use of this virtual queue
to request that the guest do some operations, e.g. drop the page cache,
synchronize the file system, etc. And the V
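A minimal sketch of what the interface could look like; the feature-bit
value and command names are placeholders, not the ones proposed in the
patch:

/* Placeholder values, for illustration only. */
#define VIRTIO_BALLOON_F_HOST_REQ_VQ    4   /* guest offers a host-request vq */

/* Commands the host could place on the new virtqueue. */
enum virtio_balloon_host_req {
        VIRTIO_BALLOON_REQ_DROP_PAGE_CACHE = 0,
        VIRTIO_BALLOON_REQ_SYNC_FS         = 1,
};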
This patch set contains two parts of changes to the virtio-balloon.
One is the change for speeding up the inflating & deflating process;
the main idea of this optimization is to use a bitmap to send the page
information to the host instead of the PFNs, to reduce the overhead of
virtio data transmission
Expose the function to get the max pfn, so it can be used in the
virtio-balloon device driver. Simply including 'linux/bootmem.h'
is not enough; if the device driver is built as a module, directly
referring to max_pfn leads to a build failure.
Signed-off-by: Liang Li
Cc: Andrew Morton
Cc: Mel Gorman
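One possible shape of that change, as a sketch only (the helper name is
assumed): a small wrapper in core mm that modules can link against:

/* In mm/page_alloc.c (sketch): */
#include <linux/bootmem.h>
#include <linux/export.h>

unsigned long get_max_pfn(void)
{
        return max_pfn;
}
EXPORT_SYMBOL(get_max_pfn);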
Will allow faster notifications using a bitmap down the road.
balloon_pfn_to_page() can be removed because it's useless.
Signed-off-by: Liang Li
Signed-off-by: Michael S. Tsirkin
Cc: Paolo Bonzini
Cc: Cornelia Huck
Cc: Amit Shah
---
drivers/virtio/virtio_balloon.c | 22 --
Add a new feature which supports sending the page information with
a bitmap. The current implementation uses a PFN array, which is not
very efficient. Using a bitmap can improve the performance of
inflating/deflating significantly.
The page bitmap header will be used to tell the host some information
abo
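The description is cut off here; as a hedged sketch, such a header might
carry the start PFN, the page size and the bitmap length, for example:

/* Field names and layout are illustrative only.
 * __virtio32/__virtio64 come from <linux/virtio_types.h>. */
struct virtio_balloon_bmap_hdr {
        __virtio64 start_pfn;   /* first PFN the bitmap describes */
        __virtio32 page_shift;  /* page size assumed by the bitmap */
        __virtio32 bmap_len;    /* bitmap length in bytes */
};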
On Thu, Oct 20, 2016 at 05:27:45PM -0400, Pan Xinhui wrote:
>
> This patch set aims to fix lock holder preemption issues.
Thanks, this looks very good. I'll wait for ACKs from at least the KVM
people, since that was I think the most contentious patch.
Corrected xen-devel mailing list address, added other Xen maintainers
On 20/10/16 23:27, Pan Xinhui wrote:
> From: Juergen Gross
>
> Support the vcpu_is_preempted() functionality under Xen. This will
> enhance lock performance on overcommitted hosts (more runnable vcpus
> than physical cpus in t
On Thu, Oct 20, 2016 at 10:37:20PM -0400, Jarod Wilson wrote:
> On Thu, Oct 20, 2016 at 11:23:54PM +0300, Michael S. Tsirkin wrote:
> > On Thu, Oct 20, 2016 at 01:55:21PM -0400, Jarod Wilson wrote:
> ...
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index fad84f3..720
On Thu, Oct 20, 2016 at 11:23:54PM +0300, Michael S. Tsirkin wrote:
> On Thu, Oct 20, 2016 at 01:55:21PM -0400, Jarod Wilson wrote:
...
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index fad84f3..720809f 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/vir
On 2016/10/21 09:23, Boqun Feng wrote:
On Thu, Oct 20, 2016 at 05:27:54PM -0400, Pan Xinhui wrote:
Commit ("x86, kvm: support vcpu preempted check") add one field "__u8
preempted" into struct kvm_steal_time. This field tells if one vcpu is
running or not.
It is zero if 1) some old KVM deos not su
On Thu, Oct 20, 2016 at 05:27:54PM -0400, Pan Xinhui wrote:
> Commit ("x86, kvm: support vcpu preempted check") add one field "__u8
> preempted" into struct kvm_steal_time. This field tells if one vcpu is
> running or not.
>
> It is zero if 1) some old KVM deos not support this filed. 2) the vcpu
On Thu, Oct 20, 2016 at 01:55:21PM -0400, Jarod Wilson wrote:
> hyperv_net:
> - set min/max_mtu, per Haiyang, after rndis_filter_device_add
>
> virtio_net:
> - set min/max_mtu
> - remove virtnet_change_mtu
> vmxnet3:
> - set min/max_mtu
>
> xen-netback:
> - min_mtu = 0, max_mtu = 65517
>
> xen-n
On Wed, 19 Oct 2016, Jarod Wilson wrote:
> hyperv_net:
> - set min/max_mtu
>
> virtio_net:
> - set min/max_mtu
> - remove virtnet_change_mtu
>
> vmxnet3:
> - set min/max_mtu
>
> CC: net...@vger.kernel.org
> CC: virtualization@lists.linux-foundation.org
> CC: "K. Y. Srinivasan"
> CC: Haiyang
> -Original Message-
> From: Jarod Wilson [mailto:ja...@redhat.com]
> Sent: Thursday, October 20, 2016 1:55 PM
> To: linux-ker...@vger.kernel.org
> Cc: Jarod Wilson ; net...@vger.kernel.org;
> virtualization@lists.linux-foundation.org; KY Srinivasan
> ; Haiyang Zhang ; Michael S.
> Tsirki
hyperv_net:
- set min/max_mtu, per Haiyang, after rndis_filter_device_add
virtio_net:
- set min/max_mtu
- remove virtnet_change_mtu
vmxnet3:
- set min/max_mtu
xen-netback:
- min_mtu = 0, max_mtu = 65517
xen-netfront:
- min_mtu = 0, max_mtu = 65535
unisys/visor:
- clean up defines a little to n
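The common pattern across these drivers is to fill in the new net_device
MTU range fields at probe time and drop per-driver bounds checks in
change_mtu; a minimal sketch with illustrative values:

#include <linux/netdevice.h>

static void example_set_mtu_range(struct net_device *dev)
{
        /* Core code now rejects out-of-range MTUs, so the driver no
         * longer needs its own change_mtu bounds check. */
        dev->min_mtu = 68;      /* smallest MTU the device accepts */
        dev->max_mtu = 65535;   /* largest MTU the device accepts */
}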
Commit ("x86, kvm: support vcpu preempted check") add one field "__u8
preempted" into struct kvm_steal_time. This field tells if one vcpu is
running or not.
It is zero if 1) some old KVM deos not support this filed. 2) the vcpu is
preempted. Other values means the vcpu has been preempted.
Signed-
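As a hedged illustration of the layout being described (the padding shown
is approximate, not necessarily the exact layout in the patch):

#include <linux/types.h>

struct kvm_steal_time {
        __u64 steal;
        __u32 version;
        __u32 flags;
        __u8  preempted;   /* 0: not preempted (or old KVM); nonzero: preempted */
        __u8  u8_pad[3];
        __u32 pad[11];     /* padding; see the actual patch for the layout */
};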
From: Juergen Gross
Support the vcpu_is_preempted() functionality under Xen. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than physical cpus in the system) as doing busy waits for preempted
vcpus will hurt system performance far worse than early yielding.
A quic
From: Christian Borntraeger
This implements the s390 backend for commit
"kernel/sched: introduce vcpu preempted check interface"
by reworking the existing smp_vcpu_scheduled into
arch_vcpu_is_preempted. We can then also get rid of the
local cpu_is_preempted function by moving the
CIF_ENABLED_WAIT
Support the vcpu_is_preempted() functionality under KVM. This will
enhance lock performance on overcommitted hosts (more runnable vcpus
than physical cpus in the system) as doing busy waits for preempted
vcpus will hurt system performance far worse than early yielding.
Use one field of struct kvm_
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself.
Currently the kernel has an interface, bool vcpu_is_preempted(int cpu). It
takes the cpu as a parameter and returns true if the cpu is preempted.
Then the kernel can break the
An over-committed guest with more vCPUs than pCPUs has a heavy overload in
the two spin_on_owner paths. This is blamed on the lock holder preemption issue.
The kernel has an interface, bool vcpu_is_preempted(int cpu), to see if a vCPU is
currently running or not, so break the spin loops on a true condition.
test-
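A simplified sketch of the idea, not the actual mutex/rwsem code: while
optimistically spinning on the owner, also bail out when the owner's CPU
is reported as preempted:

#include <linux/sched.h>   /* need_resched(), task_cpu(), vcpu_is_preempted() */

/* Simplified; the real spin_on_owner loops also check owner->on_cpu etc. */
static bool spin_on_owner_sketch(struct task_struct *owner,
                                 bool (*still_owner)(void))
{
        while (still_owner()) {
                if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
                        return false;   /* stop spinning, take the slow path */
                cpu_relax();
        }
        return true;
}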
This is to fix some lock holder preemption issues. Some other lock
implementations do a spin loop before acquiring the lock itself.
Currently the kernel has an interface, bool vcpu_is_preempted(int cpu). It
takes the cpu as a parameter and returns true if the cpu is preempted. Then
the kernel can break the
An over-committed guest with more vCPUs than pCPUs has a heavy overload in
osq_lock().
This is because vCPU A holds the osq lock and yields out, while vCPU B waits for
the per_cpu node->locked to be set. IOW, vCPU B waits for vCPU A to run and unlock
the osq lock.
The kernel has an interface, bool vcpu_is_preempted(int cpu
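A simplified, self-contained sketch of the osq_lock() idea (the node layout
here is illustrative, not the real optimistic_spin_node):

#include <linux/compiler.h>   /* READ_ONCE() */
#include <linux/sched.h>      /* need_resched(), vcpu_is_preempted() */

struct osq_node_sketch {
        int locked;
        int prev_cpu;          /* CPU that owns the previous queue node */
};

/* Returns true once the previous owner hands us the lock, false if we
 * should stop spinning and unqueue ourselves instead. */
static bool osq_spin_sketch(struct osq_node_sketch *node)
{
        while (!READ_ONCE(node->locked)) {
                if (need_resched() || vcpu_is_preempted(node->prev_cpu))
                        return false;
                cpu_relax();
        }
        return true;
}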
This patch set supports fixing the lock holder preemption issue.
For kernel users, we can use bool vcpu_is_preempted(int cpu) to detect whether
a vcpu is preempted or not.
The default implementation is a macro defined as false, so the compiler can
optimize it out if the arch does not support such a vcpu preempted check.
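In other words, something along these lines (sketch of the generic fallback):

/* Generic fallback: architectures that do not implement the check get a
 * constant false, which the compiler can optimize away entirely. */
#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu)  false
#endif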
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code