From: Jan Kiszka
Flush pending coalesced MMIO before performing mapping or state changes
that could affect the event orderings or route the buffered requests to
a wrong region.
Signed-off-by: Jan Kiszka
Signed-off-by: Marcelo Tosatti
---
memory.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
From: Jan Kiszka
In preparation for no longer flushing coalesced MMIO unconditionally on
vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced
MMIO and flush the buffer explicitly on PIO accesses that do not use
generic memory regions yet.
Signed-off-by: Jan Kiszka
Signed-off-by
From: Jan Kiszka
The memory subsystem will now take care of flushing whenever affected
regions are accessed or the memory mapping changes.
Signed-off-by: Jan Kiszka
Signed-off-by: Marcelo Tosatti
---
kvm-all.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
From: Jan Kiszka
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions, e.g.
of the same device.
Original description:
>
> We currently flush the coalesced MMIO buffer on every vmexit to
> userspace. KVM only provides a single buffer per VM, so a central lock
> is required to read from it. This is a contention point given a large
> enough VCPU set. Moreover, we need to hold the BQL while replaying the
> queued requests
In preparation for no longer flushing coalesced MMIO unconditionally on
vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced
MMIO and flush the buffer explicitly on PIO accesses that do not use
generic memory regions yet.
Signed-off-by: Jan Kiszka
---
hw/cirrus_vga.c |7
This is just a repost, now targeting uq/master as agreed. No changes
compared to v2 except that "i82378: Remove bogus MMIO coalescing" was
dropped as it is already in QEMU upstream by now.
Original description:
We currently flush the coalesced MMIO buffer on every vmexit to
userspace
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions, e.g.
of the same device
Flush pending coalesced MMIO before performing mapping or state changes
that could affect the event orderings or route the buffered requests to
a wrong region.
Signed-off-by: Jan Kiszka
---
memory.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/memory.c b/memory.c
The memory subsystem will now take care of flushing whenever affected
regions are accessed or the memory mapping changes.
Signed-off-by: Jan Kiszka
---
kvm-all.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index e0244b6..432b84f 100644
--- a/kvm-all.c
On 08/17/2012 01:55 PM, Jan Kiszka wrote:
> On 2012-07-10 12:41, Jan Kiszka wrote:
>> On 2012-07-02 11:07, Avi Kivity wrote:
>>> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>>>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>>>> this
On 2012-07-10 12:41, Jan Kiszka wrote:
> On 2012-07-02 11:07, Avi Kivity wrote:
>> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>>> this provides a mechanism to selectively flush when memory regions
On 2012-07-02 11:07, Avi Kivity wrote:
> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>> this provides a mechanism to selectively flush when memory regions
>> related to the coalesced one are accessed. This
On 07/02/2012 12:07 PM, Avi Kivity wrote:
>
> Reviewed-by: Avi Kivity
(for the entire patchset)
--
error compiling committee.c: too many arguments to function
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 06/29/2012 07:37 PM, Jan Kiszka wrote:
> Instead of flushing pending coalesced MMIO requests on every vmexit,
> this provides a mechanism to selectively flush when memory regions
> related to the coalesced one are accessed. This first of all includes
> the coalesced region itself
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions, e.g.
of the same device
> commit internally
> - flush coalesced MMIO only on memory_region_transaction_begin
>
> Original description:
>
> We currently flush the coalesced MMIO buffer on every vmexit to
> userspace. KVM only provides a single buffer per VM, so a central lock
> is required to read from it. T
On 06/27/2012 07:27 PM, Jan Kiszka wrote:
> Instead of flushing pending coalesced MMIO requests on every vmexit,
> this provides a mechanism to selectively flush when memory regions
> related to the coalesced one are accessed. This first of all includes
> the coalesced region itself
The memory subsystem will now take care of flushing whenever affected
regions are accessed or the memory mapping changes.
Signed-off-by: Jan Kiszka
---
kvm-all.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index f8e4328..a1d32f6 100644
--- a/kvm-all.c
Flush pending coalesced MMIO before performing mapping or state changes
that could affect the event orderings or route the buffered requests to
a wrong region.
Signed-off-by: Jan Kiszka
---
memory.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/memory.c b/memory.c
In preparation for no longer flushing coalesced MMIO unconditionally on
vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced
MMIO and flush the buffer explicitly on PIO accesses that do not use
generic memory regions yet.
Signed-off-by: Jan Kiszka
---
hw/cirrus_vga.c |7
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions, e.g.
of the same device
Changes in v2:
- added memory_region_clear_flush_coalesced
- call memory_region_clear_flush_coalesced from
memory_region_clear_coalescing
- wrap all region manipulations via memory_region_transaction_begin/
commit internally
- flush coalesced MMIO only on memory_region_transaction_begin
On 2012-06-25 13:01, Avi Kivity wrote:
> On 06/25/2012 01:26 PM, Jan Kiszka wrote:
>> On 2012-06-25 12:15, Jan Kiszka wrote:
>>> On 2012-06-25 10:57, Avi Kivity wrote:
The repetitiveness of this code suggests a different way of doing this:
make every API call be its own subtransaction and perform the flush in
memory_region_begin_transaction()
On 06/25/2012 01:26 PM, Jan Kiszka wrote:
> On 2012-06-25 12:15, Jan Kiszka wrote:
>> On 2012-06-25 10:57, Avi Kivity wrote:
>>> The repetitiveness of this code suggests a different way of doing this:
>>> make every API call be its own subtransaction and perform the flush in
>>> memory_region_begin_transaction() (maybe that's the answer to my
On 2012-06-25 12:15, Jan Kiszka wrote:
> On 2012-06-25 10:57, Avi Kivity wrote:
>> The repetitiveness of this code suggests a different way of doing this:
>> make every API call be its own subtransaction and perform the flush in
>> memory_region_begin_transaction() (maybe that's the answer to my
>>
On 2012-06-25 10:57, Avi Kivity wrote:
> On 06/25/2012 10:01 AM, Jan Kiszka wrote:
>> Flush pending coalesced MMIO before performing mapping or state changes
>> that could affect the event orderings or route the buffered requests to
>> a wrong region.
>>
>> Signed-off-by: Jan Kiszka
On 06/25/2012 10:01 AM, Jan Kiszka wrote:
> Flush pending coalesced MMIO before performing mapping or state changes
> that could affect the event orderings or route the buffered requests to
> a wrong region.
>
> Signed-off-by: Jan Kiszka
>
> In addition, we also have to
On 25.06.2012 09:01, Jan Kiszka wrote:
> Flush pending coalesced MMIO before performing mapping or state changes
> that could affect the event orderings or route the buffered requests to
> a wrong region.
>
> Signed-off-by: Jan Kiszka
>
> In addition, we also have to
On 06/25/2012 10:00 AM, Jan Kiszka wrote:
> Instead of flushing pending coalesced MMIO requests on every vmexit,
> this provides a mechanism to selectively flush when memory regions
> related to the coalesced one are accessed. This first of all includes
> the coalesced region itself
We currently flush the coalesced MMIO buffer on every vmexit to
userspace. KVM only provides a single buffer per VM, so a central lock
is required to read from it. This is a contention point given a large
enough VCPU set. Moreover, we need to hold the BQL while replaying the
queued requests
The memory subsystem will now take care of flushing whenever affected
regions are accessed or the memory mapping changes.
Signed-off-by: Jan Kiszka
---
kvm-all.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/kvm-all.c b/kvm-all.c
index f8e4328..a1d32f6 100644
--- a/kvm-all.c
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions, e.g.
of the same device
Flush pending coalesced MMIO before performing mapping or state changes
that could affect the event orderings or route the buffered requests to
a wrong region.
Signed-off-by: Jan Kiszka
In addition, we also have to
---
memory.c | 23 +++
1 files changed, 23 insertions(+), 0 deletions(-)
In preparation for no longer flushing coalesced MMIO unconditionally on
vmexits, mark VGA MMIO and PIO regions as synchronous w.r.t. coalesced
MMIO and flush the buffer explicitly on PIO accesses that do not use
generic memory regions yet.
Signed-off-by: Jan Kiszka
---
hw/cirrus_vga.c |7
----- Original Message -----
> ----- Original Message -----
> > On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > > > On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > > > > This patch cha
----- Original Message -----
> On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > > On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > > > This patch changes coalesced mmio to create one mmio device
> >
On Wed, Jul 20, 2011 at 08:59:00PM +0300, Sasha Levin wrote:
> This patch changes coalesced mmio to create one mmio device per
> zone instead of handling all zones in one device.
>
> Doing so enables us to take advantage of existing locking and prevents
> a race condition between
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
and lookups.
Cc: Avi Kivity
Cc: Marcelo Tosatti
On Tue, Jul 19, 2011 at 04:00:07PM +0300, Sasha Levin wrote:
> This patch changes coalesced mmio to create one mmio device per
> zone instead of handling all zones in one device.
>
> Doing so enables us to take advantage of existing locking and prevents
> a race condition between
On 07/20/2011 11:55 AM, Jan Kiszka wrote:
On 2011-07-20 10:52, Avi Kivity wrote:
> On 07/20/2011 11:43 AM, Jan Kiszka wrote:
>>>
>>> How do you implement this 3a, if your consumers are outside the main
>>> process? I guess you could have an additional synchronize API (for
>>> in-kernel consumers) or RPC (for external process consumers),
On 2011-07-20 10:52, Avi Kivity wrote:
> On 07/20/2011 11:43 AM, Jan Kiszka wrote:
>>>
>>> How do you implement this 3a, if your consumers are outside the main
>>> process? I guess you could have an additional synchronize API (for
>>> in-kernel consumers) or RPC (for external process consumers),
On 07/20/2011 11:43 AM, Jan Kiszka wrote:
>
> How do you implement this 3a, if your consumers are outside the main
> process? I guess you could have an additional synchronize API (for
> in-kernel consumers) or RPC (for external process consumers), but then
> this is no longer a simple API.
On 2011-07-20 10:24, Avi Kivity wrote:
> On 07/19/2011 08:23 PM, Jan Kiszka wrote:
>> On 2011-07-19 19:17, Avi Kivity wrote:
>>> On 07/19/2011 08:14 PM, Jan Kiszka wrote:
Another improvement - unfortunately less transparent for user space -
would be to overcome the single ring buffer that forces us to hold a
central lock in user space while processing the entries.
On 07/19/2011 08:23 PM, Jan Kiszka wrote:
On 2011-07-19 19:17, Avi Kivity wrote:
> On 07/19/2011 08:14 PM, Jan Kiszka wrote:
>>
>> Another improvement - unfortunately less transparent for user space -
>> would be to overcome the single ring buffer that forces us to hold a
>> central lock in user space while processing the entries.
kittens are killed.
>>>>
>>>> I have this on our agenda, but I wouldn't be disappointed as well if
>>>> someone else is faster.
>>>
>>> The socket mmio would have accomplished this as well.
>
> It's possible to process the coalesced mmio ring without waiting for
> an exit, no?
, but I wouldn't be disappointed as well if
>>> someone else is faster.
>>
>> The socket mmio would have accomplished this as well.
It's possible to process the coalesced mmio ring without waiting for
an exit, no? Is the performance that bad?
I would have thought it
On 2011-07-19 19:17, Avi Kivity wrote:
> On 07/19/2011 08:14 PM, Jan Kiszka wrote:
>>
>> Another improvement - unfortunately less transparent for user space -
>> would be to overcome the single ring buffer that forces us to hold a
>> central lock in user space while processing the entries. We rather need
>> per-device rings.
On 07/19/2011 08:14 PM, Jan Kiszka wrote:
Another improvement - unfortunately less transparent for user space -
would be to overcome the single ring buffer that forces us to hold a
central lock in user space while processing the entries. We rather need
per-device rings. While waiting for coalesc
(since we may want to do the same change to ioeventfds,
>> which work the same way) - how would you feel if we make devices
>> register range(s) and do a rbtree lookup instead of a linear search?
>>
>
> It makes sense. In fact your change is a good first step - so far it
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
and lookups.
Cc: Avi Kivity
Cc: Marcelo Tosatti
On 07/19/2011 03:34 PM, Sasha Levin wrote:
>
> btw, don't we leak all zones on guest destruction? the array didn't need
> any cleanup, but this list does.
>
No, the destructor is called for all devices on the bus when the bus is
going down. We're handling it in coalesced_mmio_destructor() which
On Tue, 2011-07-19 at 15:24 +0300, Avi Kivity wrote:
> On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > > On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > > > This patch changes coalesced mmio to create one
On 07/19/2011 02:05 PM, Sasha Levin wrote:
On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > This patch changes coalesced mmio to create one mmio device per
> > zone instead of handling all zones in one device.
> >
> > > (since we may want to do the same change to ioeventfds,
> > > which work the same way) - how would you feel if we make devices
> > > register range(s) and do a rbtree lookup instead of a linear search?
> > >
> >
> > It makes sense. In fact your change is a good first step - so far it
it may increase
> > significantly (since we may want to do the same change to ioeventfds,
> > which work the same way) - how would you feel if we make devices
> > register range(s) and do a rbtree lookup instead of a linear search?
> >
>
> It makes sense. In fact your change is a good first step
On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > This patch changes coalesced mmio to create one mmio device per
> > zone instead of handling all zones in one device.
> >
> > Doing so enables us to take advantage of existing locking
On 07/19/2011 01:31 PM, Sasha Levin wrote:
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
and lookups.
Cc: Avi Kivity
Cc: Marcelo Tosatti
(since we may want to do the same change to ioeventfds,
which work the same way) - how would you feel if we make devices
register range(s) and do a rbtree lookup instead of a linear search?
It makes sense. In fact your change is a good first step - so far it
was impossible to do a clever search since the searching code was not
aware of the ranges
On Tue, 2011-07-19 at 12:59 +0300, Avi Kivity wrote:
> On 07/19/2011 12:53 PM, Sasha Levin wrote:
> > > Make these per-guest instead of global. The lock may be contended, and
> > > the list shouldn't hold items from different guests (why is it needed,
> > > anyway?)
> > >
> >
> > We only need the list for removal, since we only have the range we want
On 07/19/2011 12:53 PM, Sasha Levin wrote:
> Make these per-guest instead of global. The lock may be contended, and
> the list shouldn't hold items from different guests (why is it needed,
> anyway?)
>
We only need the list for removal, since we only have the range we want
to remove, and we
On Tue, 2011-07-19 at 11:48 +0300, Avi Kivity wrote:
> On 07/19/2011 11:10 AM, Sasha Levin wrote:
> > This patch changes coalesced mmio to create one mmio device per
> > zone instead of handling all zones in one device.
> >
> > Doing so enables us to take advantage of existing locking
On 07/19/2011 11:10 AM, Sasha Levin wrote:
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
This patch changes coalesced mmio to create one mmio device per
zone instead of handling all zones in one device.
Doing so enables us to take advantage of existing locking and prevents
a race condition between coalesced mmio registration/unregistration
and lookups.
Cc: Avi Kivity
Cc: Marcelo Tosatti
On Fri, 2011-06-03 at 20:49 +0300, Sasha Levin wrote:
> Hello,
>
> I've tried using KVM_REGISTER_COALESCED_MMIO to register a coalesced
> MMIO zone.
Looks like this issue was caused because I changed my KVM_MAX_VCPUS from
64 to 1024.
--
Sasha.
Hello,
I've tried using KVM_REGISTER_COALESCED_MMIO to register a coalesced
MMIO zone.
ioctl(KVM_CHECK_EXTENSION) for KVM_CAP_COALESCED_MMIO works properly
(and returns 2).
ioctl(KVM_REGISTER_COALESCED_MMIO) with the zone also works fine (and
returns 0).
What I see is that we still
From: Jan Kiszka
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka
Signed-off-by: Marcelo Tosatti
---
kvm-all.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
From: Jan Kiszka
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka
---
kvm-all.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
From: Jan Kiszka
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka
Signed-off-by: Marcelo Tosatti
---
kvm-all.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
From: Jan Kiszka
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka
---
kvm-all.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
From: Jan Kiszka
We must flush pending mmio writes if we leave kvm_cpu_exec for an IO
window. Otherwise we risk losing those requests when migrating to a
different host during that window.
Signed-off-by: Jan Kiszka
---
kvm-all.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
On 03.01.2011 13:32, Avi Kivity wrote:
> On 01/03/2011 02:11 PM, Jan Kiszka wrote:
>> Hi again,
>>
>> another subtle difference between qemu-kvm and upstream:
>>
>> When we leave the guest for an IO window (KVM_RUN returns EINTR or
>> EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but
On 01/03/2011 02:11 PM, Jan Kiszka wrote:
Hi again,
another subtle difference between qemu-kvm and upstream:
When we leave the guest for an IO window (KVM_RUN returns EINTR or
EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but not in
upstream. Which version is better? I can't find any rationales in both
git logs.
> upstream. Which version is better? I can't find any rationales in both
> git logs.
>
Since coalesced mmio is used to prevent unnecessary exits to userspace
if vcpu thread is already in userspace why not flush coalesced mmio
buffer?
--
Gleb.
Hi again,
another subtle difference between qemu-kvm and upstream:
When we leave the guest for an IO window (KVM_RUN returns EINTR or
EAGAIN), we call kvm_flush_coalesced_mmio_buffer in qemu-kvm but not in
upstream. Which version is better? I can't find any rationales in both
git logs.
Jan
Acked-by: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Signed-off-by: Avi Kivity
---
kvm-all.c |3 ++-
qemu-barrier.h |7 +++
2 files changed, 9 insertions(+), 1 deletions(-)
create mode 100644 qemu-barrier.h
diff --git a/kvm-all.c b/kvm-all.c
index 91d3cbd..1a02076 100644
On 02/22/2010 06:57 PM, Marcelo Tosatti wrote:
Acked-by: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Applied, thanks.
--
error compiling committee.c: too many arguments to function
Acked-by: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Index: qemu/kvm-all.c
===
--- qemu.orig/kvm-all.c
+++ qemu/kvm-all.c
@@ -21,6 +21,7 @@
#include
#include "qemu-common.h"
+#include "qemu-barrier.h"
#include "sysemu.h"
On 02/22/2010 05:08 PM, Michael S. Tsirkin wrote:
I imagine all arches need an instruction. For reads as well.
Note, gcc has a __sync_synchronize() builtin that compiles to mfence on
x86. We might use that as a baseline for both rmb and wmb, and let each
arch override it incrementally.
On Mon, Feb 22, 2010 at 05:08:00PM +0200, Avi Kivity wrote:
> On 02/22/2010 04:57 PM, Michael S. Tsirkin wrote:
>>
>>
There is no need (for this case). Older read cannot be reordered with
write, writes are not reordered with other writes, writes by a single
processor are observed
On 02/22/2010 04:57 PM, Michael S. Tsirkin wrote:
There is no need (for this case). Older read cannot be reordered with
write, writes are not reordered with other writes, writes by a single
processor are observed in the same order by all processors.
Well, Linux does use sfence.
On Mon, Feb 22, 2010 at 04:57:29PM +0200, Avi Kivity wrote:
> On 02/22/2010 04:45 PM, Marcelo Tosatti wrote:
>> On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote:
>>
>>> On 02/22/2010 03:59 PM, Marcelo Tosatti wrote:
>>>
Cc: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
On 02/22/2010 04:45 PM, Marcelo Tosatti wrote:
On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote:
On 02/22/2010 03:59 PM, Marcelo Tosatti wrote:
Cc: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Index: qemu/kvm-all.c
On Mon, Feb 22, 2010 at 10:59:08AM -0300, Marcelo Tosatti wrote:
> Cc: "Michael S. Tsirkin"
> Signed-off-by: Marcelo Tosatti
Acked-by: Michael S. Tsirkin
We'll need implementation for other arches, I'll dust off
my patch that adds it and repost, but for now this
is better than what we have.
>
On Mon, Feb 22, 2010 at 04:23:32PM +0200, Avi Kivity wrote:
> On 02/22/2010 03:59 PM, Marcelo Tosatti wrote:
> >Cc: "Michael S. Tsirkin"
> >Signed-off-by: Marcelo Tosatti
> >
> >Index: qemu/kvm-all.c
> >===
> >--- qemu.orig/kvm-all.c
>
On 02/22/2010 03:59 PM, Marcelo Tosatti wrote:
Cc: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Index: qemu/kvm-all.c
===
--- qemu.orig/kvm-all.c
+++ qemu/kvm-all.c
@@ -718,6 +718,9 @@ static int kvm_handle_io(uint16_t port,
Cc: "Michael S. Tsirkin"
Signed-off-by: Marcelo Tosatti
Index: qemu/kvm-all.c
===
--- qemu.orig/kvm-all.c
+++ qemu/kvm-all.c
@@ -718,6 +718,9 @@ static int kvm_handle_io(uint16_t port,
return 1;
}
+/* FIXME: arch dependent,
- add destructor function
- move related allocation into constructor
- add stubs for !CONFIG_KVM_MMIO
Signed-off-by: Avi Kivity
---
virt/kvm/coalesced_mmio.c | 25 +++--
virt/kvm/coalesced_mmio.h | 10 ++
virt/kvm/kvm_main.c |7 +--
3 files changed,
From: Sheng Yang
The default action of coalesced MMIO is to cache writes in the buffer until:
1. The buffer is full.
2. Or we exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The writing interval is big.
On Tue, Jan 26, 2010 at 07:21:16PM +0800, Sheng Yang wrote:
> The default action of coalesced MMIO is, cache the writing in buffer, until:
> 1. The buffer is full.
> 2. Or the exit to QEmu due to other reasons.
>
> But this would result in a very late writing in some condition.
>
The default action of coalesced MMIO is to cache writes in the buffer until:
1. The buffer is full.
2. Or we exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The writing interval is big.
3. No
On Tue, Jan 26, 2010 at 10:59:17AM +0100, Alexander Graf wrote:
>
> On 26.01.2010, at 10:41, Sheng Yang wrote:
>
> > --- a/kvm-all.c
> > +++ b/kvm-all.c
> > @@ -59,6 +59,7 @@ struct KVMState
> > int vmfd;
> > int regs_modified;
> > int coalesced_mmio;
> > +struct kvm_coalesced_mmi
On 26.01.2010, at 10:41, Sheng Yang wrote:
> The default action of coalesced MMIO is, cache the writing in buffer, until:
> 1. The buffer is full.
> 2. Or the exit to QEmu due to other reasons.
>
> But this would result in a very late writing in some condition.
> 1. The each
The default action of coalesced MMIO is to cache writes in the buffer until:
1. The buffer is full.
2. Or we exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The writing interval is big.
3. No
On Mon, Jan 25, 2010 at 03:46:44PM +0800, Sheng Yang wrote:
> The default action of coalesced MMIO is, cache the writing in buffer, until:
> 1. The buffer is full.
> 2. Or the exit to QEmu due to other reasons.
>
> But this would result in a very late writing in some condition.
>
The default action of coalesced MMIO is to cache writes in the buffer until:
1. The buffer is full.
2. Or we exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The writing interval is big.
3. No
On Sunday 24 January 2010 15:35:58 Avi Kivity wrote:
> On 01/22/2010 04:22 AM, Sheng Yang wrote:
> > The default action of coalesced MMIO is, cache the writing in buffer,
> > until: 1. The buffer is full.
> > 2. Or the exit to QEmu due to other reasons.
> >
> > But
On 01/22/2010 04:22 AM, Sheng Yang wrote:
The default action of coalesced MMIO is, cache the writing in buffer, until:
1. The buffer is full.
2. Or the exit to QEmu due to other reasons.
But this would result in a very late writing in some condition.
1. The each time write to MMIO content is
The default action of coalesced MMIO is to cache writes in the buffer until:
1. The buffer is full.
2. Or we exit to QEmu for other reasons.
But this can result in very late writes under some conditions:
1. Each write to the MMIO region is small.
2. The writing interval is big.
3. No