On 06/18/2012 06:15 PM, Avi Kivity wrote:
On 06/12/2012 05:48 AM, Xiao Guangrong wrote:
This parameter will be used in the later patch
-return hva_to_pfn(kvm, addr, atomic, async, write_fault, writable);
+return hva_to_pfn(kvm, slot, addr, atomic, async, write_fault,
+
On 06/18/2012 06:13 PM, Michael S. Tsirkin wrote:
On Mon, Jun 18, 2012 at 04:03:23PM +0800, Asias He wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 14:53:10 +0800, Asias He as...@redhat.com wrote:
This patch introduces bio-based IO path for virtio-blk.
Why make it
On 06/18/2012 06:16 PM, Avi Kivity wrote:
On 06/12/2012 05:48 AM, Xiao Guangrong wrote:
This set of functions is only used to read data from host space; read is
a special case in the later patch
+/*
+ * The hva returned by this function is only allowed to be read.
+ * It should pair with
On 06/18/2012 06:21 PM, Michael S. Tsirkin wrote:
On Mon, Jun 18, 2012 at 02:53:10PM +0800, Asias He wrote:
+static void virtblk_make_request(struct request_queue *q, struct bio *bio)
+{
+ struct virtio_blk *vblk = q->queuedata;
+ unsigned int num, out = 0, in = 0;
+ struct
On 06/18/2012 06:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He as...@redhat.com wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 14:53:10 +0800, Asias He as...@redhat.com wrote:
This patch introduces bio-based IO path for virtio-blk.
Why
On 06/18/2012 07:39 PM, Sasha Levin wrote:
On Mon, 2012-06-18 at 14:14 +0300, Dor Laor wrote:
On 06/18/2012 01:05 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 16:03:23 +0800, Asias He as...@redhat.com wrote:
On 06/18/2012 03:46 PM, Rusty Russell wrote:
On Mon, 18 Jun 2012 14:53:10 +0800,
On 06/18/2012 06:58 PM, Stefan Hajnoczi wrote:
As long as the latency is decreasing that's good. But it's worth
keeping in mind that these percentages are probably wildly different
on real storage devices and/or qemu-kvm. What we don't know here is
whether this bottleneck matters in real
On 18/06/2012 16:09, Sasha Levin wrote:
Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will
use indirect descriptors and allocate them using a simple
kmalloc().
This patch adds a cache which will allow indirect buffers under
a configurable size to be allocated from that cache instead.
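The cache described above can be sketched in user space. This is only an illustrative analogue of the patch's idea, not the kernel code: buffers at or below a configurable size come from a small free-list cache, larger requests fall back to the general allocator the way plain kmalloc() did before. All names and sizes here are made up.

```c
#include <stdlib.h>

/* Hypothetical threshold and cache depth; the patch makes the
 * threshold configurable. */
#define CACHE_OBJ_SIZE 256
#define CACHE_DEPTH    16

static void *free_list[CACHE_DEPTH];
static int free_top;

static void *indirect_alloc(size_t len)
{
    if (len > CACHE_OBJ_SIZE)
        return malloc(len);            /* too big for the cache */
    if (free_top > 0)
        return free_list[--free_top];  /* reuse a cached buffer */
    return malloc(CACHE_OBJ_SIZE);     /* cache empty: allocate fresh */
}

static void indirect_free(void *p, size_t len)
{
    if (len <= CACHE_OBJ_SIZE && free_top < CACHE_DEPTH) {
        free_list[free_top++] = p;     /* park buffer in the cache */
        return;
    }
    free(p);                           /* oversized or cache full */
}
```

The win is the same as with a dedicated kmem_cache: the common small-indirect-buffer case skips the general allocator's fast path entirely and reuses a warm object.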
The core tcg/kvm code for ppc64 now has at least the outline
capability to support pagesizes beyond the standard 4k and 16MB. The
CPUState is initialized with information advertising the available
pagesizes and their correct encodings, and under the right KVM setup
this will be populated with
More recent Power server chips (i.e. based on the 64 bit hash MMU)
support more than just the traditional 4k and 16M page sizes. This
can get quite complicated, because which page sizes are supported,
which combinations are supported within an MMU segment and how these
page sizes are encoded both
On 06/15/2012 02:31 PM, Takuya Yoshikawa wrote:
This restricts hva handling in mmu code and makes it easier to extend
kvm_handle_hva() so that it can treat a range of addresses later in this
patch series.
kvm_for_each_memslot(memslot, slots) {
- unsigned long start =
On 06/15/2012 02:32 PM, Takuya Yoshikawa wrote:
When guest's memory is backed by THP pages, MMU notifier needs to call
kvm_unmap_hva(), which in turn leads to kvm_handle_hva(), in a loop to
invalidate a range of pages which constitute one huge page:
for each guest page
for each
On Mon, 18 Jun 2012 15:11:42 +0300
Avi Kivity a...@redhat.com wrote:
kvm_for_each_memslot(memslot, slots) {
- gfn_t gfn = hva_to_gfn(hva, memslot);
+ gfn_t gfn = hva_to_gfn(start_hva, memslot);
+ gfn_t end_gfn = hva_to_gfn(end_hva, memslot);
These
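The hunk above computes a gfn range from an hva range per memslot instead of a single gfn. A minimal user-space sketch of that computation, assuming 4k pages and simplified field names loosely modeled on struct kvm_memory_slot (this is an illustration, not the kernel's exact code):

```c
#include <stdint.h>

typedef uint64_t gfn_t;

/* Simplified stand-in for a KVM memslot. */
struct memslot {
    unsigned long userspace_addr;  /* hva where the slot is mapped */
    gfn_t base_gfn;                /* first guest frame of the slot */
    unsigned long npages;          /* slot length in 4k pages */
};

/* Clamp [start_hva, end_hva) to the slot and return the matching
 * [gfn, end_gfn) range, as the quoted hunk does with hva_to_gfn()
 * on start_hva and end_hva. */
static void hva_range_to_gfn_range(const struct memslot *slot,
                                   unsigned long start_hva,
                                   unsigned long end_hva,
                                   gfn_t *gfn, gfn_t *end_gfn)
{
    unsigned long slot_end = slot->userspace_addr + (slot->npages << 12);
    unsigned long s = start_hva < slot->userspace_addr ?
                      slot->userspace_addr : start_hva;
    unsigned long e = end_hva > slot_end ? slot_end : end_hva;

    *gfn = slot->base_gfn + ((s - slot->userspace_addr) >> 12);
    *end_gfn = slot->base_gfn + ((e - slot->userspace_addr) >> 12);
}
```

With a range in hand, the caller can walk all rmaps between gfn and end_gfn in one pass instead of re-entering kvm_handle_hva() once per 4k page of a huge page.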
Add support for std/ld emulation.
Signed-off-by: Varun Sethi varun.se...@freescale.com
---
arch/powerpc/kvm/emulate.c | 14 ++
1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index f90e86d..a04543a 100644
---
-Original Message-
From: Wood Scott-B07421
Sent: Tuesday, June 19, 2012 2:19 AM
To: Sethi Varun-B16395
Cc: kvm-ppc@vger.kernel.org
Subject: Re: [PATCH] KVM: PPC: bookehv64: Add support for std/ld
emulation.
On 06/18/2012 03:42 PM, Varun Sethi wrote:
Add support for std/ld
Add support for std/ld emulation.
Signed-off-by: Varun Sethi varun.se...@freescale.com
---
arch/powerpc/kvm/emulate.c | 16
1 files changed, 16 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/kvm/emulate.c b/arch/powerpc/kvm/emulate.c
index f90e86d..ee04aba 100644
---