[PATCH 6/9] memory: Flush coalesced MMIO on mapping and state changes

2012-09-11 Thread Marcelo Tosatti
From: Jan Kiszka Flush pending coalesced MMIO before performing mapping or state changes that could affect event ordering or route the buffered requests to the wrong region. Signed-off-by: Jan Kiszka Signed-off-by: Marcelo Tosatti --- memory.c |1 + 1 files changed, 1 insertions(+), 0
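
For reference, a minimal sketch of where such a flush ends up in QEMU's memory core, assuming it is hooked into memory_region_transaction_begin() as the related patches in this series do (the one-line diff itself is truncated above):

    /* memory.c (sketch, QEMU-internal API; exact placement assumed, not
     * copied from the truncated hunk) */
    void memory_region_transaction_begin(void)
    {
        /* Drain buffered coalesced MMIO before any mapping or state change
         * can reorder events or redirect the buffered requests. */
        qemu_flush_coalesced_mmio_buffer();
        ++memory_region_transaction_depth;
    }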

[PATCH 7/9] VGA: Flush coalesced MMIO on related MMIO/PIO accesses

2012-09-11 Thread Marcelo Tosatti
From: Jan Kiszka In preparation for no longer flushing coalesced MMIO unconditionally on vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced MMIO and flush the buffer explicitly on PIO accesses that do not use generic memory regions yet. Signed-off-by: Jan Kiszka Signed-off-by
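
The device-side pattern behind this is roughly the following sketch; the function and field names are illustrative, since the actual hunks in hw/cirrus_vga.c and friends are truncated above:

    /* Sketch (illustrative names): mark a VGA MMIO region as synchronous
     * w.r.t. coalesced MMIO, and flush by hand in a legacy PIO handler
     * that does not go through generic memory regions yet. */
    static void vga_init_regions(MemoryRegion *vga_mem)
    {
        memory_region_set_flush_coalesced(vga_mem);
    }

    static void vga_ioport_write(void *opaque, uint32_t addr, uint32_t val)
    {
        VGACommonState *s = opaque;

        qemu_flush_coalesced_mmio_buffer();  /* explicit flush on the PIO path */
        /* ... existing VGA register handling on s ... */
    }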

[PATCH 8/9] kvm: Stop flushing coalesced MMIO on vmexit

2012-09-11 Thread Marcelo Tosatti
From: Jan Kiszka The memory subsystem will now take care of flushing whenever affected regions are accessed or the memory mapping changes. Signed-off-by: Jan Kiszka Signed-off-by: Marcelo Tosatti --- kvm-all.c |2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diff --git a/kvm-all.c

[PATCH 3/9] memory: Flush coalesced MMIO on selected region access

2012-09-11 Thread Marcelo Tosatti
From: Jan Kiszka Instead of flushing pending coalesced MMIO requests on every vmexit, this provides a mechanism to selectively flush when memory regions related to the coalesced one are accessed. This first of all includes the coalesced region itself but can also be applied to other regions, e.g
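
In API terms, a device that coalesces writes on one region and wants related regions kept in sync would do something like the sketch below; the device and region names are made up for illustration:

    /* Sketch: the coalesced region itself is flushed on access; a related
     * region of the same device can opt in explicitly so that accesses to
     * it also flush the coalesced MMIO buffer first. */
    static void mydev_init_regions(MyDevState *dev, uint64_t fb_size)
    {
        memory_region_add_coalescing(&dev->fb_mem, 0, fb_size);
        memory_region_set_flush_coalesced(&dev->ctrl_mem);
    }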

Re: [PATCH v3 uq/master 0/6] kvm: Get coalesced MMIO flushing out of the hot-path

2012-08-23 Thread Marcelo Tosatti
inal description: > > We currently flush the coalesced MMIO buffer on every vmexit to > userspace. KVM only provides a single buffer per VM, so a central lock > is required to read from it. This is a contention point given a large > enough VCPU set. Moreover, we need to hold the BQL while r

[PATCH v3 uq/master 5/6] VGA: Flush coalesced MMIO on related MMIO/PIO accesses

2012-08-23 Thread Jan Kiszka
In preparation for no longer flushing coalesced MMIO unconditionally on vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced MMIO and flush the buffer explicitly on PIO accesses that do not use generic memory regions yet. Signed-off-by: Jan Kiszka --- hw/cirrus_vga.c |7

[PATCH v3 uq/master 0/6] kvm: Get coalesced MMIO flushing out of the hot-path

2012-08-23 Thread Jan Kiszka
This is just a repost, now targeting uq/master as agreed. No changes compared to v2 except that "i82378: Remove bogus MMIO coalescing" was dropped as it is already in QEMU upstream by now. Original description: We currently flush the coalesced MMIO buffer on every vmexit to userspace

[PATCH v3 uq/master 1/6] memory: Flush coalesced MMIO on selected region access

2012-08-23 Thread Jan Kiszka
Instead of flushing pending coalesced MMIO requests on every vmexit, this provides a mechanism to selectively flush when memory regions related to the coalesced one are accessed. This first of all includes the coalesced region itself but can also be applied to other regions, e.g. of the same device

[PATCH v3 uq/master 4/6] memory: Flush coalesced MMIO on mapping and state changes

2012-08-23 Thread Jan Kiszka
Flush pending coalesced MMIO before performing mapping or state changes that could affect event ordering or route the buffered requests to the wrong region. Signed-off-by: Jan Kiszka --- memory.c |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/memory.c b/memory.c

[PATCH v3 uq/master 6/6] kvm: Stop flushing coalesced MMIO on vmexit

2012-08-23 Thread Jan Kiszka
The memory subsystem will now take care of flushing whenever affected regions are accessed or the memory mapping changes. Signed-off-by: Jan Kiszka --- kvm-all.c |2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index e0244b6..432b84f 100644 --- a/kv

Re: [PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-08-19 Thread Avi Kivity
On 08/17/2012 01:55 PM, Jan Kiszka wrote: > On 2012-07-10 12:41, Jan Kiszka wrote: >> On 2012-07-02 11:07, Avi Kivity wrote: >>> On 06/29/2012 07:37 PM, Jan Kiszka wrote: >>>> Instead of flushing pending coalesced MMIO requests on every vmexit, >>>> this

Re: [PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-08-17 Thread Jan Kiszka
On 2012-07-10 12:41, Jan Kiszka wrote: > On 2012-07-02 11:07, Avi Kivity wrote: >> On 06/29/2012 07:37 PM, Jan Kiszka wrote: >>> Instead of flushing pending coalesced MMIO requests on every vmexit, >>> this provides a mechanism to selectively flush when memory regions &

Re: [PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-07-10 Thread Jan Kiszka
On 2012-07-02 11:07, Avi Kivity wrote: > On 06/29/2012 07:37 PM, Jan Kiszka wrote: >> Instead of flushing pending coalesced MMIO requests on every vmexit, >> this provides a mechanism to selectively flush when memory regions >> related to the coalesced one are accessed. This

Re: [PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-07-02 Thread Avi Kivity
On 07/02/2012 12:07 PM, Avi Kivity wrote: > > Reviewed-by: Avi Kivity (for the entire patchset) -- error compiling committee.c: too many arguments to function

Re: [PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-07-02 Thread Avi Kivity
On 06/29/2012 07:37 PM, Jan Kiszka wrote: > Instead of flushing pending coalesced MMIO requests on every vmexit, > this provides a mechanism to selectively flush when memory regions > related to the coalesced one are accessed. This first of all includes > the coalesced region itself

[PATCH v3 2/7] memory: Flush coalesced MMIO on selected region access

2012-06-29 Thread Jan Kiszka
Instead of flushing pending coalesced MMIO requests on every vmexit, this provides a mechanism to selectively flush when memory regions related to the coalesced one are accessed. This first of all includes the coalesced region itself but can also be applied to other regions, e.g. of the same device

Re: [PATCH v2 0/7] kvm: Get coalesced MMIO flushing out of the hot-path

2012-06-28 Thread Avi Kivity
ernally > - flush coalesced MMIO only on memory_region_transaction_begin > > Original description: > > We currently flush the coalesced MMIO buffer on every vmexit to > userspace. KVM only provides a single buffer per VM, so a central lock > is required to read from it. T

Re: [PATCH v2 2/7] memory: Flush coalesced MMIO on selected region access

2012-06-28 Thread Avi Kivity
On 06/27/2012 07:27 PM, Jan Kiszka wrote: > Instead of flushing pending coalesced MMIO requests on every vmexit, > this provides a mechanism to selectively flush when memory regions > related to the coalesced one are accessed. This first of all includes > the coalesced region itself

[PATCH v2 7/7] kvm: Stop flushing coalesced MMIO on vmexit

2012-06-27 Thread Jan Kiszka
The memory subsystem will now take care of flushing whenever affected regions are accessed or the memory mapping changes. Signed-off-by: Jan Kiszka --- kvm-all.c |2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index f8e4328..a1d32f6 100644 --- a/kv

[PATCH v2 5/7] memory: Flush coalesced MMIO on mapping and state changes

2012-06-27 Thread Jan Kiszka
Flush pending coalesced MMIO before performing mapping or state changes that could affect event ordering or route the buffered requests to the wrong region. Signed-off-by: Jan Kiszka --- memory.c |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/memory.c b/memory.c

[PATCH v2 6/7] VGA: Flush coalesced MMIO on related MMIO/PIO accesses

2012-06-27 Thread Jan Kiszka
In preparation for no longer flushing coalesced MMIO unconditionally on vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced MMIO and flush the buffer explicitly on PIO accesses that do not use generic memory regions yet. Signed-off-by: Jan Kiszka --- hw/cirrus_vga.c |7

[PATCH v2 2/7] memory: Flush coalesced MMIO on selected region access

2012-06-27 Thread Jan Kiszka
Instead of flushing pending coalesced MMIO requests on every vmexit, this provides a mechanism to selectively flush when memory regions related to the coalesced one are accessed. This first of all includes the coalesced region itself but can also be applied to other regions, e.g. of the same device

[PATCH v2 0/7] kvm: Get coalesced MMIO flushing out of the hot-path

2012-06-27 Thread Jan Kiszka
Changes in v2: - added memory_region_clear_flush_coalesced - call memory_region_clear_flush_coalesced from memory_region_clear_coalescing - wrap all region manipulations via memory_region_transaction_begin/ commit internally - flush coalesced MMIO only on memory_region_transaction_begin

Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Jan Kiszka
On 2012-06-25 13:01, Avi Kivity wrote: > On 06/25/2012 01:26 PM, Jan Kiszka wrote: >> On 2012-06-25 12:15, Jan Kiszka wrote: >>> On 2012-06-25 10:57, Avi Kivity wrote: The repetitiveness of this code suggests a different way of doing this: make every API call be its own subtransaction and

Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Avi Kivity
On 06/25/2012 01:26 PM, Jan Kiszka wrote: > On 2012-06-25 12:15, Jan Kiszka wrote: >> On 2012-06-25 10:57, Avi Kivity wrote: >>> The repetitiveness of this code suggests a different way of doing this: >>> make every API call be its own subtransaction and perform the flush in >>> memory_region_begin

Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Jan Kiszka
On 2012-06-25 12:15, Jan Kiszka wrote: > On 2012-06-25 10:57, Avi Kivity wrote: >> The repetitiveness of this code suggests a different way of doing this: >> make every API call be its own subtransaction and perform the flush in >> memory_region_begin_transaction() (maybe that's the answer to my >>

Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Jan Kiszka
On 2012-06-25 10:57, Avi Kivity wrote: > On 06/25/2012 10:01 AM, Jan Kiszka wrote: >> Flush pending coalesced MMIO before performing mapping or state changes >> that could affect the event orderings or route the buffered requests to >> a wrong region. >> >> Si

Re: [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Avi Kivity
On 06/25/2012 10:01 AM, Jan Kiszka wrote: > Flush pending coalesced MMIO before performing mapping or state changes > that could affect the event orderings or route the buffered requests to > a wrong region. > > Signed-off-by: Jan Kiszka > > In addition, we als

Re: [Qemu-devel] [PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Andreas Färber
On 25.06.2012 09:01, Jan Kiszka wrote: > Flush pending coalesced MMIO before performing mapping or state changes > that could affect the event orderings or route the buffered requests to > a wrong region. > > Signed-off-by: Jan Kiszka > > In addition, we also have to S

Re: [PATCH 2/5] memory: Flush coalesced MMIO on selected region access

2012-06-25 Thread Avi Kivity
On 06/25/2012 10:00 AM, Jan Kiszka wrote: > Instead of flushing pending coalesced MMIO requests on every vmexit, > this provides a mechanism to selectively flush when memory regions > related to the coalesced one are accessed. This first of all includes > the coalesced region itself

[PATCH 0/5] kvm: Get coalesced MMIO flushing out of the hot-path

2012-06-25 Thread Jan Kiszka
We currently flush the coalesced MMIO buffer on every vmexit to userspace. KVM only provides a single buffer per VM, so a central lock is required to read from it. This is a contention point given a large enough VCPU set. Moreover, we need to hold the BQL while replaying the queued requests

[PATCH 5/5] kvm: Stop flushing coalesced MMIO on vmexit

2012-06-25 Thread Jan Kiszka
The memory subsystem will now take care of flushing whenever affected regions are accessed or the memory mapping changes. Signed-off-by: Jan Kiszka --- kvm-all.c |2 -- 1 files changed, 0 insertions(+), 2 deletions(-) diff --git a/kvm-all.c b/kvm-all.c index f8e4328..a1d32f6 100644 --- a/kv

[PATCH 2/5] memory: Flush coalesced MMIO on selected region access

2012-06-25 Thread Jan Kiszka
Instead of flushing pending coalesced MMIO requests on every vmexit, this provides a mechanism to selectively flush when memory regions related to the coalesced one are accessed. This first of all includes the coalesced region itself but can also be applied to other regions, e.g. of the same device

[PATCH 3/5] memory: Flush coalesced MMIO on mapping and state changes

2012-06-25 Thread Jan Kiszka
Flush pending coalesced MMIO before performing mapping or state changes that could affect event ordering or route the buffered requests to the wrong region. Signed-off-by: Jan Kiszka In addition, we also have to --- memory.c | 23 +++ 1 files changed, 23 insertions

[PATCH 4/5] VGA: Flush coalesced MMIO on related MMIO/PIO accesses

2012-06-25 Thread Jan Kiszka
In preparation for no longer flushing coalesced MMIO unconditionally on vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced MMIO and flush the buffer explicitly on PIO accesses that do not use generic memory regions yet. Signed-off-by: Jan Kiszka --- hw/cirrus_vga.c |7

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2012-02-12 Thread Amos Kong
- Original Message - > - Original Message - > > On 07/19/2011 02:05 PM, Sasha Levin wrote: > > > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote: > > > > On 07/19/2011 01:31 PM, Sasha Levin wrote: > > > > > This patch cha

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-12-21 Thread Amos Kong
- Original Message - > On 07/19/2011 02:05 PM, Sasha Levin wrote: > > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote: > > > On 07/19/2011 01:31 PM, Sasha Levin wrote: > > > > This patch changes coalesced mmio to create one mmio device > >

Re: [PATCH v4] MMIO: Make coalesced mmio use a device per zone

2011-07-22 Thread Marcelo Tosatti
On Wed, Jul 20, 2011 at 08:59:00PM +0300, Sasha Levin wrote: > This patch changes coalesced mmio to create one mmio device per > zone instead of handling all zones in one device. > > Doing so enables us to take advantage of existing locking and prevents > a race condition between

[PATCH v4] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Sasha Levin
This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration and lookups. Cc: Avi Kivity Cc: Marcelo
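
Structurally, the change amounts to giving every registered zone its own device on the MMIO bus instead of one device scanning all zones; a rough sketch of the per-zone bookkeeping (simplified, field layout not guaranteed to match the final patch):

    /* virt/kvm/coalesced_mmio.c (sketch): one device per registered zone,
     * kept on a per-VM list only so that unregistration can find it again. */
    struct kvm_coalesced_mmio_dev {
        struct list_head list;               /* linked into the VM's zone list */
        struct kvm_io_device dev;            /* registered on the MMIO bus */
        struct kvm *kvm;
        struct kvm_coalesced_mmio_zone zone; /* guest-physical range covered */
    };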

Re: [PATCH v3] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Marcelo Tosatti
On Tue, Jul 19, 2011 at 04:00:07PM +0300, Sasha Levin wrote: > This patch changes coalesced mmio to create one mmio device per > zone instead of handling all zones in one device. > > Doing so enables us to take advantage of existing locking and prevents > a race condition between

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Avi Kivity
On 07/20/2011 11:55 AM, Jan Kiszka wrote: On 2011-07-20 10:52, Avi Kivity wrote: > On 07/20/2011 11:43 AM, Jan Kiszka wrote: >>> >>> How do you implement this 3a, if your consumers are outside the main >>> process? I guess you could have an additional synchronize API (for >>> in-kernel con

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Jan Kiszka
On 2011-07-20 10:52, Avi Kivity wrote: > On 07/20/2011 11:43 AM, Jan Kiszka wrote: >>> >>> How do you implement this 3a, if your consumers are outside the main >>> process? I guess you could have an additional synchronize API (for >>> in-kernel consumers) or RPC (for external process consumers),

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Avi Kivity
On 07/20/2011 11:43 AM, Jan Kiszka wrote: > > How do you implement this 3a, if your consumers are outside the main > process? I guess you could have an additional synchronize API (for > in-kernel consumers) or RPC (for external process consumers), but then > this is no longer a simple API. I

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Jan Kiszka
On 2011-07-20 10:24, Avi Kivity wrote: > On 07/19/2011 08:23 PM, Jan Kiszka wrote: >> On 2011-07-19 19:17, Avi Kivity wrote: >>> On 07/19/2011 08:14 PM, Jan Kiszka wrote: Another improvement - unfortunately less transparent for user space - would be to overcome the single ring buf

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-20 Thread Avi Kivity
On 07/19/2011 08:23 PM, Jan Kiszka wrote: On 2011-07-19 19:17, Avi Kivity wrote: > On 07/19/2011 08:14 PM, Jan Kiszka wrote: >> >> Another improvement - unfortunately less transparent for user space - >> would be to overcome the single ring buffer that forces us to hold a >> central lock in u

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Jan Kiszka
kittens are killed. >>>> >>>> I have this on our agenda, but I wouldn't be disappointed as well if >>>> someone else is faster. >>> >>> The socket mmio would have accomplished this as well. > > It's possible to process the

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
, but I wouldn't be disappointed as well if >>> someone else is faster. >> >> The socket mmio would have accomplished this as well. It's possible to process the coalesced mmio ring without waiting for an exit, no? Is the performance that bad? I would have thought it

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Jan Kiszka
On 2011-07-19 19:17, Avi Kivity wrote: > On 07/19/2011 08:14 PM, Jan Kiszka wrote: >> >> Another improvement - unfortunately less transparent for user space - >> would be to overcome the single ring buffer that forces us to hold a >> central lock in user space while processing the entries. We rathe

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 08:14 PM, Jan Kiszka wrote: Another improvement - unfortunately less transparent for user space - would be to overcome the single ring buffer that forces us to hold a central lock in user space while processing the entries. We rather need per-device rings. While waiting for coalesc

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Jan Kiszka
ince we may want to do the same change to ioeventfds, >> which work the same way) - how would you feel if we make devices >> register range(s) and do a rbtree lookup instead of a linear search? >> > > It makes sense. In fact your change is a good first step - so far it &g

[PATCH v3] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration and lookups. Cc: Avi Kivity Cc: Marcelo

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 03:34 PM, Sasha Levin wrote: > > btw, don't we leak all zones on guest destruction? the array didn't need > any cleanup, but this list does. > No, the destructor is called for all devices on the bus when the bus is going down. We're handling it in coalesced_mmio_destructor() whic

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
On Tue, 2011-07-19 at 15:24 +0300, Avi Kivity wrote: > On 07/19/2011 02:05 PM, Sasha Levin wrote: > > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote: > > > On 07/19/2011 01:31 PM, Sasha Levin wrote: > > > > This patch changes coalesced mmio to create one

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 02:05 PM, Sasha Levin wrote: On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote: > On 07/19/2011 01:31 PM, Sasha Levin wrote: > > This patch changes coalesced mmio to create one mmio device per > > zone instead of handling all zones in one device. > > &

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
nt to do the same change to ioeventfds, > > > which work the same way) - how would you feel if we make devices > > > register range(s) and do a rbtree lookup instead of a linear search? > > > > > > > It makes sense. In fact your change is a good first step - so fa

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
it may increase > > significantly (since we may want to do the same change to ioeventfds, > > which work the same way) - how would you feel if we make devices > > register range(s) and do a rbtree lookup instead of a linear search? > > > > It makes sense. In fact you

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote: > On 07/19/2011 01:31 PM, Sasha Levin wrote: > > This patch changes coalesced mmio to create one mmio device per > > zone instead of handling all zones in one device. > > > > Doing so enables us to take advantage of ex

Re: [PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 01:31 PM, Sasha Levin wrote: This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration

[PATCH v2] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration and lookups. Cc: Avi Kivity Cc: Marcelo

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
ds, which work the same way) - how would you feel if we make devices register range(s) and do an rbtree lookup instead of a linear search? It makes sense. In fact your change is a good first step - so far it was impossible to do a clever search since the searching code was not aware of the ranges

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
On Tue, 2011-07-19 at 12:59 +0300, Avi Kivity wrote: > On 07/19/2011 12:53 PM, Sasha Levin wrote: > > > Make these per-guest instead of global. The lock may be contended, and > > > the list shouldn't hold items from different guests (why is it needed, > > > anyway?) > > > > > > > We only need t

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 12:53 PM, Sasha Levin wrote: > Make these per-guest instead of global. The lock may be contended, and > the list shouldn't hold items from different guests (why is it needed, > anyway?) > We only need the list for removal, since we only have the range we want to remove, and we

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
On Tue, 2011-07-19 at 11:48 +0300, Avi Kivity wrote: > On 07/19/2011 11:10 AM, Sasha Levin wrote: > > This patch changes coalesced mmio to create one mmio device per > > zone instead of handling all zones in one device. > > > > Doing so enables us to take advantage of ex

Re: [PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Avi Kivity
On 07/19/2011 11:10 AM, Sasha Levin wrote: This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration

[PATCH] MMIO: Make coalesced mmio use a device per zone

2011-07-19 Thread Sasha Levin
This patch changes coalesced mmio to create one mmio device per zone instead of handling all zones in one device. Doing so enables us to take advantage of existing locking and prevents a race condition between coalesced mmio registration/unregistration and lookups. Cc: Avi Kivity Cc: Marcelo

Re: Coalesced MMIO

2011-06-03 Thread Sasha Levin
On Fri, 2011-06-03 at 20:49 +0300, Sasha Levin wrote: > Hello, > > I've tried using KVM_REGISTER_COALESCED_MMIO to register a coalesced > MMIO zone. Looks like this issue was caused because I changed my KVM_MAX_VCPUS from 64 to 1024. -- Sasha.

Coalesced MMIO

2011-06-03 Thread Sasha Levin
Hello, I've tried using KVM_REGISTER_COALESCED_MMIO to register a coalesced MMIO zone. ioctl(KVM_CHECK_EXTENSION) for KVM_CAP_COALESCED_MMIO works properly (and returns 2). ioctl(KVM_REGISTER_COALESCED_MMIO) with the zone also works fine (and returns 0). What I see is that we stil
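
For context, registering a zone from userspace looks roughly like the sketch below; the address is hypothetical and error handling is trimmed:

    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Sketch: register a coalesced MMIO zone on an existing VM fd. */
    static int register_coalesced_zone(int vm_fd)
    {
        struct kvm_coalesced_mmio_zone zone = {
            .addr = 0xd0000000,   /* hypothetical MMIO base */
            .size = 0x1000,
        };

        if (ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone) < 0) {
            perror("KVM_REGISTER_COALESCED_MMIO");
            return -1;
        }
        return 0;
    }

Writes to that range are then queued in the shared ring and only become visible when userspace drains the buffer, which is exactly the behaviour being discussed in this thread.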

[PATCH 28/31] kvm: Flush coalesced mmio buffer on IO window exits

2011-01-24 Thread Marcelo Tosatti
From: Jan Kiszka We must flush pending mmio writes if we leave kvm_cpu_exec for an IO window. Otherwise we risk losing those requests when migrating to a different host during that window. Signed-off-by: Jan Kiszka Signed-off-by: Marcelo Tosatti --- kvm-all.c |4 ++-- 1 files changed, 2
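
A plausible reconstruction of the two-line change inside the kvm_cpu_exec() run loop (the diff is truncated above; exact context assumed):

    /* kvm-all.c, inside the kvm_cpu_exec() loop (sketch): flush the
     * coalesced MMIO buffer before bailing out for an I/O window, so the
     * buffered writes cannot be lost across a live migration. */
    ret = kvm_vcpu_ioctl(env, KVM_RUN, 0);

    kvm_flush_coalesced_mmio_buffer();  /* now also covers -EINTR/-EAGAIN */

    if (ret == -EINTR || ret == -EAGAIN) {
        ret = 0;
        break;                          /* leave the loop for the I/O window */
    }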

[PATCH 15/18] kvm: Flush coalesced mmio buffer on IO window exits

2011-01-21 Thread Jan Kiszka
From: Jan Kiszka We must flush pending mmio writes if we leave kvm_cpu_exec for an IO window. Otherwise we risk losing those requests when migrating to a different host during that window. Signed-off-by: Jan Kiszka --- kvm-all.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-)

[PATCH 32/35] kvm: Flush coalesced mmio buffer on IO window exits

2011-01-06 Thread Marcelo Tosatti
From: Jan Kiszka We must flush pending mmio writes if we leave kvm_cpu_exec for an IO window. Otherwise we risk losing those requests when migrating to a different host during that window. Signed-off-by: Jan Kiszka Signed-off-by: Marcelo Tosatti --- kvm-all.c |4 ++-- 1 files changed, 2

[PATCH v3 19/21] kvm: Flush coalesced mmio buffer on IO window exits

2011-01-04 Thread Jan Kiszka
From: Jan Kiszka We must flush pending mmio writes if we leave kvm_cpu_exec for an IO window. Otherwise we risk losing those requests when migrating to a different host during that window. Signed-off-by: Jan Kiszka --- kvm-all.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-)

[PATCH 18/17] kvm: Flush coalesced mmio buffer on IO window exits

2011-01-03 Thread Jan Kiszka
From: Jan Kiszka We must flush pending mmio writes if we leave kvm_cpu_exec for an IO window. Otherwise we risk losing those requests when migrating to a different host during that window. Signed-off-by: Jan Kiszka --- kvm-all.c |4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-)

Re: qemu vs. kvm: When to flush the coalesced mmio buffer?

2011-01-03 Thread Jan Kiszka
On 03.01.2011 13:32, Avi Kivity wrote: > On 01/03/2011 02:11 PM, Jan Kiszka wrote: >> Hi again, >> >> another subtle difference between qemu-kvm and upstream: >> >> When we leave the guest for an IO window (KVM_RUN returns EINTR or >> EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but

Re: qemu vs. kvm: When to flush the coalesced mmio buffer?

2011-01-03 Thread Avi Kivity
On 01/03/2011 02:11 PM, Jan Kiszka wrote: Hi again, another subtle difference between qemu-kvm and upstream: When we leave the guest for an IO window (KVM_RUN returns EINTR or EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but not in upstream. Which version is better? I can't find

Re: [Qemu-devel] qemu vs. kvm: When to flush the coalesced mmio buffer?

2011-01-03 Thread Gleb Natapov
n > upstream. Which version is better? I can't find any rationale in either > git log. > Since coalesced mmio is used to prevent unnecessary exits to userspace, if the vcpu thread is already in userspace why not flush the coalesced mmio buffer? -- Gleb.

qemu vs. kvm: When to flush the coalesced mmio buffer?

2011-01-03 Thread Jan Kiszka
Hi again, another subtle difference between qemu-kvm and upstream: When we leave the guest for an IO window (KVM_RUN returns EINTR or EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but not in upstream. Which version is better? I can't find any rationale in either git log. Jan sig

[PATCH 8/8] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Marcelo Tosatti
Acked-by: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Signed-off-by: Avi Kivity --- kvm-all.c |3 ++- qemu-barrier.h |7 +++ 2 files changed, 9 insertions(+), 1 deletions(-) create mode 100644 qemu-barrier.h diff --git a/kvm-all.c b/kvm-all.c index 91d3cbd..1a02076 10

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Avi Kivity
On 02/22/2010 06:57 PM, Marcelo Tosatti wrote: Acked-by: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Applied, thanks. -- error compiling committee.c: too many arguments to function

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Marcelo Tosatti
Acked-by: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Index: qemu/kvm-all.c === --- qemu.orig/kvm-all.c +++ qemu/kvm-all.c @@ -21,6 +21,7 @@ #include #include "qemu-common.h" +#include "qemu-barrier.h" #include "sysem

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Avi Kivity
On 02/22/2010 05:08 PM, Michael S. Tsirkin wrote: I imagine all arches need an instruction. For reads as well. Note, gcc has a __sync_synchronize() builtin that compiles to mfence on x86. We might use that as a baseline for both rmb and wmb, and let each arch override it incrementally.
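
A minimal sketch of such a baseline header, following the __sync_synchronize() suggestion above (x86-centric; other architectures would override it with lighter barriers later):

    /* qemu-barrier.h (sketch): gcc builtin as a portable baseline.  On x86
     * this compiles to mfence, which is stronger than strictly required. */
    #ifndef QEMU_BARRIER_H
    #define QEMU_BARRIER_H

    #define smp_wmb()   __sync_synchronize()
    #define smp_rmb()   __sync_synchronize()

    #endif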

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Michael S. Tsirkin
On Mon, Feb 22, 2010 at 05:08:00PM +0200, Avi Kivity wrote: > On 02/22/2010 04:57 PM, Michael S. Tsirkin wrote: >> >> There is no need (for this case). Older read cannot be reordered with write, writes are not reordered with other writes, writes by a single processor are observed

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Avi Kivity
On 02/22/2010 04:57 PM, Michael S. Tsirkin wrote: There is no need (for this case). Older read cannot be reordered with write, writes are not reordered with other writes, writes by a single processor are observed in the same order by all processors. Well, Linux does use sfence.

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Michael S. Tsirkin
On Mon, Feb 22, 2010 at 04:57:29PM +0200, Avi Kivity wrote: > On 02/22/2010 04:45 PM, Marcelo Tosatti wrote: >> On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote: >> >>> On 02/22/2010 03:59 PM, Marcelo Tosatti wrote: >>> Cc: "Michael S. Tsirkin" Signed-off-by: Marcelo T

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Avi Kivity
On 02/22/2010 04:45 PM, Marcelo Tosatti wrote: On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote: On 02/22/2010 03:59 PM, Marcelo Tosatti wrote: Cc: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Index: qemu/kvm-all.c

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Michael S. Tsirkin
On Mon, Feb 22, 2010 at 10:59:08AM -0300, Marcelo Tosatti wrote: > Cc: "Michael S. Tsirkin" > Signed-off-by: Marcelo Tosatti Acked-by: Michael S. Tsirkin We'll need implementation for other arches, I'll dust off my patch that adds it and repost, but for now this is better than what we have. >

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Marcelo Tosatti
On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote: > On 02/22/2010 03:59 PM, Marcelo Tosatti wrote: > >Cc: "Michael S. Tsirkin" > >Signed-off-by: Marcelo Tosatti > > > >Index: qemu/kvm-all.c > >=== > >--- qemu.orig/kvm-all.c >

Re: [patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Avi Kivity
On 02/22/2010 03:59 PM, Marcelo Tosatti wrote: Cc: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Index: qemu/kvm-all.c === --- qemu.orig/kvm-all.c +++ qemu/kvm-all.c @@ -718,6 +718,9 @@ static int kvm_handle_io(uint16_t port,

[patch uq/master 2/2] kvm-all.c: define smp_wmb and use it for coalesced mmio

2010-02-22 Thread Marcelo Tosatti
Cc: "Michael S. Tsirkin" Signed-off-by: Marcelo Tosatti Index: qemu/kvm-all.c === --- qemu.orig/kvm-all.c +++ qemu/kvm-all.c @@ -718,6 +718,9 @@ static int kvm_handle_io(uint16_t port, return 1; } +/* FIXME: arch dependant,

[PATCH 20/40] KVM: Simplify coalesced mmio initialization

2010-02-10 Thread Avi Kivity
- add destructor function - move related allocation into constructor - add stubs for !CONFIG_KVM_MMIO Signed-off-by: Avi Kivity --- virt/kvm/coalesced_mmio.c | 25 +++-- virt/kvm/coalesced_mmio.h | 10 ++ virt/kvm/kvm_main.c |7 +-- 3 files changed,

[PATCH 1/6] kvm: Flush coalesced MMIO buffer periodically

2010-02-03 Thread Marcelo Tosatti
From: Sheng Yang The default behaviour of coalesced MMIO is to cache writes in the buffer until: 1. the buffer is full, or 2. we exit to QEMU for other reasons. But this can result in a very late write under some conditions: 1. each MMIO write is small, 2. the writing
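
One way to arrange the periodic flush is a timer that drains the buffer even when no vmexit happens; a sketch of the idea using the timer API of that era (the hook point and interval are illustrative, not taken from the truncated patch):

    /* Sketch: flush the coalesced MMIO buffer from a periodic timer so
     * buffered writes cannot sit in the ring indefinitely. */
    static QEMUTimer *coalesced_flush_timer;

    static void coalesced_flush_tick(void *opaque)
    {
        kvm_flush_coalesced_mmio_buffer();
        qemu_mod_timer(coalesced_flush_timer,
                       qemu_get_clock(rt_clock) + 25);  /* illustrative period (ms) */
    }

    static void coalesced_flush_timer_init(void)
    {
        coalesced_flush_timer = qemu_new_timer(rt_clock, coalesced_flush_tick, NULL);
        qemu_mod_timer(coalesced_flush_timer, qemu_get_clock(rt_clock) + 25);
    }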

Re: [PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Marcelo Tosatti
On Tue, Jan 26, 2010 at 07:21:16PM +0800, Sheng Yang wrote: > The default action of coalesced MMIO is, cache the writing in buffer, until: > 1. The buffer is full. > 2. Or the exit to QEmu due to other reasons. > > But this would result in a very late writing in some condition. >

[PATCH v3][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in the buffer until: 1. the buffer is full, or 2. we exit to QEMU for other reasons. But this can result in a very late write under some conditions: 1. each MMIO write is small, 2. the writing interval is long, 3. no

Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
On Tue, Jan 26, 2010 at 10:59:17AM +0100, Alexander Graf wrote: > > On 26.01.2010, at 10:41, Sheng Yang wrote: > > > --- a/kvm-all.c > > +++ b/kvm-all.c > > @@ -59,6 +59,7 @@ struct KVMState > > int vmfd; > > int regs_modified; > > int coalesced_mmio; > > +struct kvm_coalesced_mmi

Re: [PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Alexander Graf
On 26.01.2010, at 10:41, Sheng Yang wrote: > The default action of coalesced MMIO is, cache the writing in buffer, until: > 1. The buffer is full. > 2. Or the exit to QEmu due to other reasons. > > But this would result in a very late writing in some condition. > 1. The each

[PATCH v2][uqmaster] kvm: Flush coalesced MMIO buffer periodically

2010-01-26 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in the buffer until: 1. the buffer is full, or 2. we exit to QEMU for other reasons. But this can result in a very late write under some conditions: 1. each MMIO write is small, 2. the writing interval is long, 3. no

Re: [PATCH] kvm: Flush coalesced MMIO buffer periodically

2010-01-25 Thread Marcelo Tosatti
On Mon, Jan 25, 2010 at 03:46:44PM +0800, Sheng Yang wrote: > The default action of coalesced MMIO is, cache the writing in buffer, until: > 1. The buffer is full. > 2. Or the exit to QEmu due to other reasons. > > But this would result in a very late writing in some condition. >

[PATCH] kvm: Flush coalesced MMIO buffer periodically

2010-01-24 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in the buffer until: 1. the buffer is full, or 2. we exit to QEMU for other reasons. But this can result in a very late write under some conditions: 1. each MMIO write is small, 2. the writing interval is long, 3. no

Re: [PATCH] kvm: Flush coalesced MMIO buffer periodically

2010-01-24 Thread Sheng Yang
On Sunday 24 January 2010 15:35:58 Avi Kivity wrote: > On 01/22/2010 04:22 AM, Sheng Yang wrote: > > The default action of coalesced MMIO is, cache the writing in buffer, > > until: 1. The buffer is full. > > 2. Or the exit to QEmu due to other reasons. > > > > But

Re: [PATCH] kvm: Flush coalesced MMIO buffer periodically

2010-01-23 Thread Avi Kivity
On 01/22/2010 04:22 AM, Sheng Yang wrote: The default action of coalesced MMIO is, cache the writing in buffer, until: 1. The buffer is full. 2. Or the exit to QEmu due to other reasons. But this would result in a very late writing in some condition. 1. The each time write to MMIO content is

[PATCH] kvm: Flush coalesced MMIO buffer periodically

2010-01-21 Thread Sheng Yang
The default behaviour of coalesced MMIO is to cache writes in the buffer until: 1. the buffer is full, or 2. we exit to QEMU for other reasons. But this can result in a very late write under some conditions: 1. each MMIO write is small, 2. the writing interval is long, 3. no
