[Qemu-devel] [Bug 1444081] Re: x86_64 heavy crash on PPC 64 host

2015-04-14 Thread thh
A similar bug has been fixed already for rc3 (see
http://git.qemu.org/?p=qemu.git;a=commit;h=cf811fff2ae20008f00455d0ab2212a4dea0b56f
).

Could you please:

1) Try with rc3 to see whether it still happens there

2) Check whether your qemu binary is compiled as a 32-bit or 64-bit
application (running "file qemu-system-x86-64" should do the job).
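
For example, on a 64-bit build the "file" output should look roughly like
this (exact wording varies by distribution; the binary is normally
installed as qemu-system-x86_64):

  $ file qemu-system-x86_64
  qemu-system-x86_64: ELF 64-bit MSB executable, 64-bit PowerPC ...

A 32-bit build would report "ELF 32-bit" instead.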

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1444081

Title:
  x86_64 heavy crash on PPC 64 host

Status in QEMU:
  New

Bug description:
  this happened to me with the latest 2.3.0 rc2:
  qemu-system-x86-64 crashes with only "-m 2047" or "-m 1024" and -hda set

  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x00181f9a000a

  EAX= EBX= ECX= EDX=0663
  ESI= EDI= EBP= ESP=
  EIP=0009fff3 EFL=0046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
  ES =   9300
  CS =f000   9b00
  SS =   9300
  DS =   9300
  FS =   9300
  GS =   9300
  LDT=   8200
  TR =   8b00
  GDT=  
  IDT=  
  CR0=6010 CR2= CR3= CR4=
  DR0= DR1= DR2= 
DR3=
  DR6=0ff0 DR7=0400
  CCS= CCD= CCO=ADDB
  EFER=
  FCW=037f FSW= [ST=0] FTW=00 MXCSR=1f80
  FPR0=  FPR1= 
  FPR2=  FPR3= 
  FPR4=  FPR5= 
  FPR6=  FPR7= 
  XMM00= XMM01=
  XMM02= XMM03=
  XMM04= XMM05=
  XMM06= XMM07=
  Aborted (core dumped)

  Keep up the good work

  My host machine:
  G5 Quad, Radeon HD 6570 2 GB, 8 GB RAM ...
  Host OS: Lubuntu 14.04.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1444081/+subscriptions



Re: [Qemu-devel] Failing iotests in v2.3.0-rc2 / master

2015-04-14 Thread Jeff Cody
On Tue, Apr 14, 2015 at 11:57:35AM +0200, Kevin Wolf wrote:
> Am 11.04.2015 um 05:41 hat Andreas Färber geschrieben:
> > Hi,
> > 
> > 001 seems to hang for -qcow (or is not reasonably "quick": >5 min).
> > 
> > 033 is failing for -vhdx.
> > 
> > (Note that `make check-block` only tests -qcow2, so didn't uncover
> > either of them.)
> > 
> > Given a failing test, am I seeing correctly that there is no command
> > line option to skip this one failing test? -x seems to be for groups only.
> > 
> > Regards,
> > Andreas
> > 
> > $ ./check -v -T -qcow -g quick
> > [...]
> > 001 6s ...[05:12:39]
> 
> qcow1 is just really slow. 001 passes for me, after 202 seconds (that's
> on my SSD, YMMV).
> 
> > $ ./check -v -T -vhdx -g quick
> > [...]
> > 033 1s ...[04:06:09] [04:06:11] - output mismatch (see 033.out.bad)
> 
> This seems to be because blkdebug doesn't implement .bdrv_truncate.
> Currently the test case isn't suitable for VHDX, which uses explicit
> bdrv_truncate() calls to grow the image file. I'll send a patch for
> blkdebug to allow this.
> 
> However, it seems that there is another problem which causes assertion
> failures when using VHDX over blkdebug. Jeff, does the following fix
> make sense to you? (I think it does, but I don't understand yet why the
> assertion failure is only triggered with blkdebug - or in other words:
> "how could this ever work?")
> 
> Kevin

Kevin,

Yes, looking at that fix, it makes sense - we want to pad the back
part of the block, after the actual data, with zeros. That back
length should be (block size - (bytes avail + block offset)), which is
iov2.iov_len.
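
(As a made-up sanity check of that arithmetic: with a 1 MiB block
(0x100000 bytes), a block offset of 0x1000 and 0x3000 bytes available,
the zero padding at the back is 0x100000 - (0x3000 + 0x1000) = 0xfc000
bytes - exactly what ends up in iov2.iov_len.)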

There are two reasons I think we haven't seen this issue (it has been
hidden):

1.) If bs->file supports zero init, we don't do any of this

2.) This is done for the case when the existing BAT state is
PAYLOAD_BLOCK_ZERO.  Until recently (commit 30af51c), we didn't
create VHDX files with blocks in the PAYLOAD_BLOCK_ZERO state.

So it has been a latent bug in a hitherto rarely (if ever) exercised
path.

Jeff
> 
> --- a/block/vhdx.c
> +++ b/block/vhdx.c
> @@ -1285,7 +1285,7 @@ static coroutine_fn int vhdx_co_writev(BlockDriverState 
> *bs, int64_t sector_num,
>  iov2.iov_base = qemu_blockalign(bs, iov2.iov_len);
>  memset(iov2.iov_base, 0, iov2.iov_len);
>  qemu_iovec_concat_iov(&hd_qiov, &iov2, 1, 0,
> -  sinfo.block_offset);
> +  iov2.iov_len);
>  sectors_to_write += iov2.iov_len >> BDRV_SECTOR_BITS;
>  }
>  }



[Qemu-devel] [PATCH v1 0/1] s390 pci infrastructure modeling

2015-04-14 Thread Hong Bo Li
This patch extends the current s390 pci implementation to
provide more flexibility in configuration of s390 specific
device handling. For this we had to introduce a new facility
(and bus) to hold devices representing information actually
provided by s390 firmware and I/O configuration.

For each vfio pci device, I create a zpci device to store s390
specific information, and attach all of these special zpci devices
to the s390 facility bus. A zpci device references the corresponding
PCI device via its device id.

Compared to the old implementation, I moved the actual hotplug/unplug
code into the s390 pci device hotplug function. In the pcihost
hotplug function we then don't need to do anything special. In the
pcihost unplug function, we need to unplug the corresponding zpci device.

The new design allows defining multiple host bridges; each host bridge
can hold at most 32 zpci devices.

The topology for this implementation could be:

  dev: s390-pcihost, id ""
bus: pci.0
  type PCI
  dev: vfio-pci, id "vpci1"
host = ":00:00.0"
..
  dev: vfio-pci, id "vpci2"
host = "0001:00:00.0"
..
  dev: s390-pci-facility, id ""
bus: s390-pci-fac-bus.0
  type s390-pci-fac-bus
  dev: zpci, id "zpci1"
fid = 1 (0x1)
uid = 2 (0x2)
pci_id = "vpci1"
  dev: zpci, id "zpci2"
fid = 6 (0x6)
uid = 7 (0x7)
pci_id = "vpci2"

To make the review easier, I kept all of the old names, such as
S390PCIBusDevice for a zpci device. I will send a cleanup
patch later to change these to more suitable names.

Hong Bo Li (1):
  s390 pci infrastructure modeling

 hw/s390x/s390-pci-bus.c| 317 +
 hw/s390x/s390-pci-bus.h|  48 ++-
 hw/s390x/s390-pci-inst.c   |   4 +-
 hw/s390x/s390-virtio-ccw.c |   4 +-
 4 files changed, 285 insertions(+), 88 deletions(-)

-- 
1.9.3





[Qemu-devel] [PATCH v1 1/1] s390 pci infrastructure modeling

2015-04-14 Thread Hong Bo Li
This patch contains the actual interesting changes.
usage example:
-device s390-pcihost
-device vfio-pci,host=:00:00.0,id=vpci1
-device zpci,fid=2,uid=5,pci_id=vpci1,id=zpci1

The first line will create an s390 pci host bridge
and init the root bus. The second line will create
a standard vfio pci device and attach it to the
root bus. This is similar to the standard process
for defining a pci device on other platforms.

The third line will create an s390 pci device to
store s390 specific information; it references
the corresponding vfio pci device via device id.
We create an s390 pci facility bus to hold all the
zpci devices.

Signed-off-by: Hong Bo Li 
---
 hw/s390x/s390-pci-bus.c| 317 +
 hw/s390x/s390-pci-bus.h|  48 ++-
 hw/s390x/s390-pci-inst.c   |   4 +-
 hw/s390x/s390-virtio-ccw.c |   4 +-
 4 files changed, 285 insertions(+), 88 deletions(-)

diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 3c086f6..c81093e 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -32,8 +32,8 @@ int chsc_sei_nt2_get_event(void *res)
 PciCcdfErr *eccdf;
 int rc = 1;
 SeiContainer *sei_cont;
-S390pciState *s = S390_PCI_HOST_BRIDGE(
-object_resolve_path(TYPE_S390_PCI_HOST_BRIDGE, NULL));
+S390PCIFacility *s = S390_PCI_FACILITY(
+object_resolve_path(TYPE_S390_PCI_FACILITY, NULL));
 
 if (!s) {
 return rc;
@@ -72,8 +72,8 @@ int chsc_sei_nt2_get_event(void *res)
 
 int chsc_sei_nt2_have_event(void)
 {
-S390pciState *s = S390_PCI_HOST_BRIDGE(
-object_resolve_path(TYPE_S390_PCI_HOST_BRIDGE, NULL));
+S390PCIFacility *s = S390_PCI_FACILITY(
+object_resolve_path(TYPE_S390_PCI_FACILITY, NULL));
 
 if (!s) {
 return 0;
@@ -82,20 +82,32 @@ int chsc_sei_nt2_have_event(void)
 return !QTAILQ_EMPTY(&s->pending_sei);
 }
 
+void s390_pci_device_enable(S390PCIBusDevice *zpci)
+{
+zpci->fh = zpci->fh | 1 << ENABLE_BIT_OFFSET;
+}
+
+void s390_pci_device_disable(S390PCIBusDevice *zpci)
+{
+zpci->fh = zpci->fh & ~(1 << ENABLE_BIT_OFFSET);
+if (zpci->is_unplugged)
+object_unparent(OBJECT(zpci));
+}
+
 S390PCIBusDevice *s390_pci_find_dev_by_fid(uint32_t fid)
 {
 S390PCIBusDevice *pbdev;
-int i;
-S390pciState *s = S390_PCI_HOST_BRIDGE(
-object_resolve_path(TYPE_S390_PCI_HOST_BRIDGE, NULL));
+BusChild *kid;
+S390PCIFacility *s = S390_PCI_FACILITY(
+object_resolve_path(TYPE_S390_PCI_FACILITY, NULL));
 
 if (!s) {
 return NULL;
 }
 
-for (i = 0; i < PCI_SLOT_MAX; i++) {
-pbdev = &s->pbdev[i];
-if ((pbdev->fh != 0) && (pbdev->fid == fid)) {
+QTAILQ_FOREACH(kid, &s->fbus->qbus.children, sibling) {
+pbdev = (S390PCIBusDevice *)kid->child;
+if (pbdev->fid == fid) {
 return pbdev;
 }
 }
@@ -126,39 +138,24 @@ void s390_pci_sclp_configure(int configure, SCCB *sccb)
 return;
 }
 
-static uint32_t s390_pci_get_pfid(PCIDevice *pdev)
-{
-return PCI_SLOT(pdev->devfn);
-}
-
-static uint32_t s390_pci_get_pfh(PCIDevice *pdev)
-{
-return PCI_SLOT(pdev->devfn) | FH_VIRT;
-}
-
 S390PCIBusDevice *s390_pci_find_dev_by_idx(uint32_t idx)
 {
 S390PCIBusDevice *pbdev;
-int i;
-int j = 0;
-S390pciState *s = S390_PCI_HOST_BRIDGE(
-object_resolve_path(TYPE_S390_PCI_HOST_BRIDGE, NULL));
+BusChild *kid;
+int i = 0;
+S390PCIFacility *s = S390_PCI_FACILITY(
+object_resolve_path(TYPE_S390_PCI_FACILITY, NULL));
 
 if (!s) {
 return NULL;
 }
 
-for (i = 0; i < PCI_SLOT_MAX; i++) {
-pbdev = &s->pbdev[i];
-
-if (pbdev->fh == 0) {
-continue;
-}
-
-if (j == idx) {
+QTAILQ_FOREACH(kid, &s->fbus->qbus.children, sibling) {
+pbdev = (S390PCIBusDevice *)kid->child;
+if (i == idx) {
 return pbdev;
 }
-j++;
+i++;
 }
 
 return NULL;
@@ -167,16 +164,17 @@ S390PCIBusDevice *s390_pci_find_dev_by_idx(uint32_t idx)
 S390PCIBusDevice *s390_pci_find_dev_by_fh(uint32_t fh)
 {
 S390PCIBusDevice *pbdev;
-int i;
-S390pciState *s = S390_PCI_HOST_BRIDGE(
-object_resolve_path(TYPE_S390_PCI_HOST_BRIDGE, NULL));
+BusChild *kid;
+S390PCIFacility *s = S390_PCI_FACILITY(
+object_resolve_path(TYPE_S390_PCI_FACILITY, NULL));
+
 
 if (!s || !fh) {
 return NULL;
 }
 
-for (i = 0; i < PCI_SLOT_MAX; i++) {
-pbdev = &s->pbdev[i];
+QTAILQ_FOREACH(kid, &s->fbus->qbus.children, sibling) {
+pbdev = (S390PCIBusDevice *)kid->child;
 if (pbdev->fh == fh) {
 return pbdev;
 }
@@ -185,12 +183,33 @@ S390PCIBusDevice *s390_pci_find_dev_by_fh(uint32_t fh)
 return NULL;
 }
 
+static S390PCIBusDevice *s390_pci_find_dev_by_pdev(PCIDevice *pdev)
+{
+S390PCIBusDevice *pbdev;
+BusChild *kid;
+S390PCIFacility *s = S390_PCI_FACILITY(

Re: [Qemu-devel] [PATCH qemu v6 02/15] vmstate: Define VARRAY with VMS_ALLOC

2015-04-14 Thread David Gibson
On Sat, Apr 11, 2015 at 01:24:31AM +1000, Alexey Kardashevskiy wrote:
> This allows dynamic allocation for migrating arrays.
> 
> Already existing VMSTATE_VARRAY_UINT32 requires an array to be
> pre-allocated, however there are cases when the size is not known in
> advance and there is no real need to enforce it.
> 
> This defines another variant of VMSTATE_VARRAY_UINT32 with the VMS_ALLOC
> flag which tells the receiving side to allocate memory for the array
> before receiving the data.
> 
> The first user of it is a dynamic DMA window whose existence and size
> are totally dynamic.
> 
> Signed-off-by: Alexey Kardashevskiy 

Reviewed-by: David Gibson 
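
(Purely as an illustration - the struct and field names below are made
up, and I'm assuming the new variant is spelled VMSTATE_VARRAY_UINT32_ALLOC -
a dynamically allocated migrated array would then be declared roughly as
a .fields entry like:

    /* mig_table is allocated on the destination before the data arrives;
       mig_nb_table is a uint32_t field holding the element count */
    VMSTATE_VARRAY_UINT32_ALLOC(mig_table, MyDeviceState, mig_nb_table,
                                0, vmstate_info_uint64, uint64_t),
)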

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




[Qemu-devel] [PATCH] vhost: pass correct log base to kernel

2015-04-14 Thread Wen Congyang
Signed-off-by: Wen Congyang 
---
 hw/virtio/vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 5a12861..4e334ca 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1060,7 +1060,7 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice 
*vdev)
 hdev->log_size = vhost_get_log_size(hdev);
 hdev->log = hdev->log_size ?
 g_malloc0(hdev->log_size * sizeof *hdev->log) : NULL;
-r = hdev->vhost_ops->vhost_call(hdev, VHOST_SET_LOG_BASE, hdev->log);
+r = hdev->vhost_ops->vhost_call(hdev, VHOST_SET_LOG_BASE, &hdev->log);
 if (r < 0) {
 r = -errno;
 goto fail_log;
-- 
2.1.0



Re: [Qemu-devel] [Question] Support of China Loongson processor

2015-04-14 Thread vt
On 2015/4/15 9:08, Rob Landley  wrote:
> On Mon, Apr 13, 2015 at 6:29 AM, vt  wrote:
>> Hi, guys
>>
>> I saw the architecture code about mips in the qemu and kvm modules, so there is
>> no doubt that mips cpus can be supported.
>
> It looks like the 32 bit one should work fine. I haven't played with
> 64 bit yet but there's some support for it in the tree, give it a try?
>
>   http://en.wikipedia.org/wiki/Loongson
>
> Heh. The background on the "4 patented instructions" mentioned above
> is mips' lawsuit against Lexra many years ago:
>
>   http://landley.net/notes-2009.html#14-12-2009
>
> If you were wondering why mips had a lost decade where most of its
> customers switched over to arm, convincing the world you're a patent
> troll will do that. But it's been well over a decade and most people
> seem to have forgotten now. And china never cared about US
> intellectual property infighting anyway...
>
>> But I wonder if anyone has used qemu/kvm virtualization with a Chinese Loongson
>> processor (MIPS architecture) without modification of the qemu/kvm code.
>> None of the information I have found on the Internet answers my question.
>
> I have a mips r4k system emulation working fine at:
>
>   http://landley.net/aboriginal/bin/system-image-mips.tar.gz
>
> (That's based off of linux 3.18 I think, I have 3.19 building locally,
> 4.0 is on the todo list.)
>
> I haven't tried 64 bit yet but:
>
> $ qemu-system-mips64 -cpu ? | grep Loongson
> MIPS 'Loongson-2E'
> MIPS 'Loongson-2F'
>
> It's apparently there...
>
> Rob
>

Rob, Thanks for your help.

Sangfor VT

Re: [Qemu-devel] [PATCH v4 01/20] hw/i386: Move ACPI header definitions in an arch-independent location

2015-04-14 Thread Shannon Zhao
On 2015/4/3 18:03, Shannon Zhao wrote:
> From: Shannon Zhao 
> 
> The ACPI related header file acpi-defs.h includes definitions that
> apply to other architectures as well. Move it to `include/hw/acpi/`
> so it can sanely be included from other architectures.
> 
> Signed-off-by: Alvise Rigo 
> Signed-off-by: Shannon Zhao 
> Signed-off-by: Shannon Zhao 
> ---
>  hw/i386/acpi-build.c|   2 +-
>  hw/i386/acpi-defs.h | 368 --------------------
>  include/hw/acpi/acpi-defs.h | 368 ++++++++++++++++++++
>  tests/bios-tables-test.c|   2 +-
>  4 files changed, 370 insertions(+), 370 deletions(-)
>  delete mode 100644 hw/i386/acpi-defs.h
>  create mode 100644 include/hw/acpi/acpi-defs.h

Hi Igor, Michael,

Could you help review patches 01 and 02 of this patchset, as they are related to
x86 ACPI?

-- 
Thanks,
Shannon




[Qemu-devel] [PATCH] s390x: Fix stoc direction

2015-04-14 Thread Alexander Graf
The store conditional instruction wants to store when the condition
is fulfilled, so we should branch out when it's not true.

The code today branches out when the condition is true, clearly
reversing the logic. Fix it up by negating the condition.

Signed-off-by: Alexander Graf 
---
 target-s390x/translate.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/target-s390x/translate.c b/target-s390x/translate.c
index 4f82edd..8ae4912 100644
--- a/target-s390x/translate.c
+++ b/target-s390x/translate.c
@@ -3082,6 +3082,10 @@ static ExitStatus op_soc(DisasContext *s, DisasOps *o)
 
 disas_jcc(s, &c, get_field(s->fields, m3));
 
+/* We want to store when the condition is fulfilled, so branch
+   out when it's not */
+c.cond = tcg_invert_cond(c.cond);
+
 lab = gen_new_label();
 if (c.is_64) {
 tcg_gen_brcond_i64(c.cond, c.u.s64.a, c.u.s64.b, lab);
-- 
1.8.1.4




Re: [Qemu-devel] [PATCH v2 01/14] memory: Define API for MemoryRegionOps to take attrs and return status

2015-04-14 Thread Edgar E. Iglesias
On Mon, Apr 13, 2015 at 02:21:51PM +0100, Peter Maydell wrote:
> Define an API so that devices can register MemoryRegionOps whose read
> and write callback functions are passed an arbitrary pointer to some
> transaction attributes and can return a success-or-failure status code.
> This will allow us to model devices which:
>  * behave differently for ARM Secure/NonSecure memory accesses
>  * behave differently for privileged/unprivileged accesses
>  * may return a transaction failure (causing a guest exception)
>for erroneous accesses
> 
> This patch defines the new API and plumbs the attributes parameter through
> to the memory.c public level functions io_mem_read() and io_mem_write(),
> where it is currently dummied out.
> 
> The success/failure response indication is also propagated out to
> io_mem_read() and io_mem_write(), which retain the old-style
> boolean true-for-error return.
> 
> Signed-off-by: Peter Maydell 


Reviewed-by: Edgar E. Iglesias 
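
As a quick illustration of the new hooks (a sketch only - the device and
its behaviour are invented here; only the callback signatures and the
MEMTX_*/MemTxAttrs definitions come from the patch):

static MemTxResult mydev_read_with_attrs(void *opaque, hwaddr addr,
                                         uint64_t *data, unsigned size,
                                         MemTxAttrs attrs)
{
    if (attrs.unspecified) {
        /* bus master supplied no attributes; treat as a default access */
    }
    *data = 0;        /* invented register contents */
    return MEMTX_OK;  /* or MEMTX_ERROR / MEMTX_DECODE_ERROR on failure */
}

static const MemoryRegionOps mydev_ops = {
    .read_with_attrs = mydev_read_with_attrs,
    .endianness = DEVICE_NATIVE_ENDIAN,
};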



> Acked-by: Paolo Bonzini 
> ---
>  include/exec/memattrs.h |  41 ++
>  include/exec/memory.h   |  22 +
>  memory.c| 207 
> 
>  3 files changed, 203 insertions(+), 67 deletions(-)
>  create mode 100644 include/exec/memattrs.h
> 
> diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
> new file mode 100644
> index 000..1cb3fc0
> --- /dev/null
> +++ b/include/exec/memattrs.h
> @@ -0,0 +1,41 @@
> +/*
> + * Memory transaction attributes
> + *
> + * Copyright (c) 2015 Linaro Limited.
> + *
> + * Authors:
> + *  Peter Maydell 
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#ifndef MEMATTRS_H
> +#define MEMATTRS_H
> +
> +/* Every memory transaction has associated with it a set of
> + * attributes. Some of these are generic (such as the ID of
> + * the bus master); some are specific to a particular kind of
> + * bus (such as the ARM Secure/NonSecure bit). We define them
> + * all as non-overlapping bitfields in a single struct to avoid
> + * confusion if different parts of QEMU used the same bit for
> + * different semantics.
> + */
> +typedef struct MemTxAttrs {
> +/* Bus masters which don't specify any attributes will get this
> + * (via the MEMTXATTRS_UNSPECIFIED constant), so that we can
> + * distinguish "all attributes deliberately clear" from
> + * "didn't specify" if necessary.
> + */
> +unsigned int unspecified:1;
> +} MemTxAttrs;
> +
> +/* Bus masters which don't specify any attributes will get this,
> + * which has all attribute bits clear except the topmost one
> + * (so that we can distinguish "all attributes deliberately clear"
> + * from "didn't specify" if necessary).
> + */
> +#define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs) { .unspecified = 1 })
> +
> +#endif
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 06ffa1d..703d9e5 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -28,6 +28,7 @@
>  #ifndef CONFIG_USER_ONLY
>  #include "exec/hwaddr.h"
>  #endif
> +#include "exec/memattrs.h"
>  #include "qemu/queue.h"
>  #include "qemu/int128.h"
>  #include "qemu/notify.h"
> @@ -68,6 +69,16 @@ struct IOMMUTLBEntry {
>  IOMMUAccessFlags perm;
>  };
>  
> +/* New-style MMIO accessors can indicate that the transaction failed.
> + * A zero (MEMTX_OK) response means success; anything else is a failure
> + * of some kind. The memory subsystem will bitwise-OR together results
> + * if it is synthesizing an operation from multiple smaller accesses.
> + */
> +#define MEMTX_OK 0
> +#define MEMTX_ERROR (1U << 0) /* device returned an error */
> +#define MEMTX_DECODE_ERROR  (1U << 1) /* nothing at that address */
> +typedef uint32_t MemTxResult;
> +
>  /*
>   * Memory region callbacks
>   */
> @@ -84,6 +95,17 @@ struct MemoryRegionOps {
>uint64_t data,
>unsigned size);
>  
> +MemTxResult (*read_with_attrs)(void *opaque,
> +   hwaddr addr,
> +   uint64_t *data,
> +   unsigned size,
> +   MemTxAttrs attrs);
> +MemTxResult (*write_with_attrs)(void *opaque,
> +hwaddr addr,
> +uint64_t data,
> +unsigned size,
> +MemTxAttrs attrs);
> +
>  enum device_endian endianness;
>  /* Guest-visible constraints: */
>  struct {
> diff --git a/memory.c b/memory.c
> index ee3f2a8..9bb5674 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -368,57 +368,84 @@ static void adjust_endianness(MemoryRegion *mr, 
> uint64_t *data, unsigned size)
>  }
>  }
>  
> -static void memory_region_oldmmio_read_accessor(MemoryRegion *mr,
> +static MemTxResult memory_region_oldmmio_read_accessor

Re: [Qemu-devel] [Question] Support of China Loongson processor

2015-04-14 Thread Rob Landley
On Mon, Apr 13, 2015 at 6:29 AM, vt  wrote:
> Hi, guys
>
> I saw the architecture code about mips in the qemu and kvm modules, so there is
> no doubt that mips cpus can be supported.

It looks like the 32 bit one should work fine. I haven't played with
64 bit yet but there's some support for it in the tree, give it a try?

  http://en.wikipedia.org/wiki/Loongson

Heh. The background on the "4 patented instructions" mentioned above
is mips' lawsuit against Lexra many years ago:

  http://landley.net/notes-2009.html#14-12-2009

If you were wondering why mips had a lost decade where most of its
customers switched over to arm, convincing the world you're a patent
troll will do that. But it's been well over a decade and most people
seem to have forgotten now. And china never cared about US
intellectual property infighting anyway...

> But I wonder if anyone has used qemu/kvm virtualization with a Chinese Loongson
> processor (MIPS architecture) without modification of the qemu/kvm code.
> None of the information I have found on the Internet answers my question.

I have a mips r4k system emulation working fine at:

  http://landley.net/aboriginal/bin/system-image-mips.tar.gz

(That's based off of linux 3.18 I think, I have 3.19 building locally,
4.0 is on the todo list.)

I haven't tried 64 bit yet but:

$ qemu-system-mips64 -cpu ? | grep Loongson
MIPS 'Loongson-2E'
MIPS 'Loongson-2F'

It's apparently there...

Rob



Re: [Qemu-devel] [PATCH] block/iscsi: do not forget to logout from target

2015-04-14 Thread ronnie sahlberg
Reviewed-By: Ronnie Sahlberg 

On Tue, Apr 14, 2015 at 1:37 AM, Peter Lieven  wrote:
> We actually were always impolitely dropping the connection and
> not cleanly logging out.
>
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Peter Lieven 
> ---
>  block/iscsi.c | 6 ++
>  1 file changed, 6 insertions(+)
>
> diff --git a/block/iscsi.c b/block/iscsi.c
> index ab20e4d..0b6d3dd 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -1503,6 +1503,9 @@ out:
>
>  if (ret) {
>  if (iscsi != NULL) {
> +if (iscsi_is_logged_in(iscsi)) {
> +iscsi_logout_sync(iscsi);
> +}
>  iscsi_destroy_context(iscsi);
>  }
>  memset(iscsilun, 0, sizeof(IscsiLun));
> @@ -1516,6 +1519,9 @@ static void iscsi_close(BlockDriverState *bs)
>  struct iscsi_context *iscsi = iscsilun->iscsi;
>
>  iscsi_detach_aio_context(bs);
> +if (iscsi_is_logged_in(iscsi)) {
> +iscsi_logout_sync(iscsi);
> +}
>  iscsi_destroy_context(iscsi);
>  g_free(iscsilun->zeroblock);
>  g_free(iscsilun->allocationmap);
> --
> 1.9.1
>



Re: [Qemu-devel] [V9fs-developer] [Bug 1336794] Re: 9pfs does not honor open file handles on unlinked files

2015-04-14 Thread Al Viro
On Tue, Apr 14, 2015 at 04:19:41PM +, Eric Van Hensbergen wrote:
> That patch looks fine by me.  Happy to put it in the queue.  Thanks Al.

OK...  Here's one more:

9p: don't bother with __getname() in ->follow_link()

We copy there a kmalloc'ed string and proceed to kfree that string immediately
after that.  Easier to just feed that string to nd_set_link() and _not_
kfree it until ->put_link() (which becomes kfree_put_link() in that case).

Signed-off-by: Al Viro 
---
diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
index 099c771..48d35d8 100644
--- a/fs/9p/v9fs.h
+++ b/fs/9p/v9fs.h
@@ -150,8 +150,6 @@ extern int v9fs_vfs_unlink(struct inode *i, struct dentry 
*d);
 extern int v9fs_vfs_rmdir(struct inode *i, struct dentry *d);
 extern int v9fs_vfs_rename(struct inode *old_dir, struct dentry *old_dentry,
struct inode *new_dir, struct dentry *new_dentry);
-extern void v9fs_vfs_put_link(struct dentry *dentry, struct nameidata *nd,
-   void *p);
 extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses,
 struct p9_fid *fid,
 struct super_block *sb, int new);
diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
index cda68f7..0ba1171 100644
--- a/fs/9p/vfs_inode.c
+++ b/fs/9p/vfs_inode.c
@@ -1224,103 +1224,46 @@ ino_t v9fs_qid2ino(struct p9_qid *qid)
 }
 
 /**
- * v9fs_readlink - read a symlink's location (internal version)
+ * v9fs_vfs_follow_link - follow a symlink path
  * @dentry: dentry for symlink
- * @buffer: buffer to load symlink location into
- * @buflen: length of buffer
+ * @nd: nameidata
  *
  */
 
-static int v9fs_readlink(struct dentry *dentry, char *buffer, int buflen)
+static void *v9fs_vfs_follow_link(struct dentry *dentry, struct nameidata *nd)
 {
-   int retval;
-
-   struct v9fs_session_info *v9ses;
-   struct p9_fid *fid;
+   struct v9fs_session_info *v9ses = v9fs_dentry2v9ses(dentry);
+   struct p9_fid *fid = v9fs_fid_lookup(dentry);
struct p9_wstat *st;
 
-   p9_debug(P9_DEBUG_VFS, " %pd\n", dentry);
-   retval = -EPERM;
-   v9ses = v9fs_dentry2v9ses(dentry);
-   fid = v9fs_fid_lookup(dentry);
+   p9_debug(P9_DEBUG_VFS, "%pd\n", dentry);
+
if (IS_ERR(fid))
-   return PTR_ERR(fid);
+   return ERR_CAST(fid);
 
if (!v9fs_proto_dotu(v9ses))
-   return -EBADF;
+   return ERR_PTR(-EBADF);
 
st = p9_client_stat(fid);
if (IS_ERR(st))
-   return PTR_ERR(st);
+   return ERR_CAST(st);
 
if (!(st->mode & P9_DMSYMLINK)) {
-   retval = -EINVAL;
-   goto done;
+   p9stat_free(st);
+   kfree(st);
+   return ERR_PTR(-EINVAL);
}
+   if (strlen(st->extension) >= PATH_MAX)
+   st->extension[PATH_MAX - 1] = '\0';
 
-   /* copy extension buffer into buffer */
-   retval = min(strlen(st->extension)+1, (size_t)buflen);
-   memcpy(buffer, st->extension, retval);
-
-   p9_debug(P9_DEBUG_VFS, "%pd -> %s (%.*s)\n",
-dentry, st->extension, buflen, buffer);
-
-done:
+   nd_set_link(nd, st->extension);
+   st->extension = NULL;
p9stat_free(st);
kfree(st);
-   return retval;
-}
-
-/**
- * v9fs_vfs_follow_link - follow a symlink path
- * @dentry: dentry for symlink
- * @nd: nameidata
- *
- */
-
-static void *v9fs_vfs_follow_link(struct dentry *dentry, struct nameidata *nd)
-{
-   int len = 0;
-   char *link = __getname();
-
-   p9_debug(P9_DEBUG_VFS, "%pd\n", dentry);
-
-   if (!link)
-   link = ERR_PTR(-ENOMEM);
-   else {
-   len = v9fs_readlink(dentry, link, PATH_MAX);
-
-   if (len < 0) {
-   __putname(link);
-   link = ERR_PTR(len);
-   } else
-   link[min(len, PATH_MAX-1)] = 0;
-   }
-   nd_set_link(nd, link);
-
return NULL;
 }
 
 /**
- * v9fs_vfs_put_link - release a symlink path
- * @dentry: dentry for symlink
- * @nd: nameidata
- * @p: unused
- *
- */
-
-void
-v9fs_vfs_put_link(struct dentry *dentry, struct nameidata *nd, void *p)
-{
-   char *s = nd_get_link(nd);
-
-   p9_debug(P9_DEBUG_VFS, " %pd %s\n",
-dentry, IS_ERR(s) ? "" : s);
-   if (!IS_ERR(s))
-   __putname(s);
-}
-
-/**
  * v9fs_vfs_mkspecial - create a special file
  * @dir: inode to create special file in
  * @dentry: dentry to create
@@ -1514,7 +1457,7 @@ static const struct inode_operations 
v9fs_file_inode_operations = {
 static const struct inode_operations v9fs_symlink_inode_operations = {
.readlink = generic_readlink,
.follow_link = v9fs_vfs_follow_link,
-   .put_link = v9fs_vfs_put_link,
+   .put_link = kfree_put_link,
.getattr = v9fs_vfs_getattr,
.setattr = v9fs_vfs_setattr,
 };
di

Re: [Qemu-devel] [PATCH v2 16/16] hw/intc/arm_gic: add gic_update() for grouping

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> GICs with grouping (GICv2 or GICv1 with Security Extensions) have a
> different exception generation model which is more complicated than
> without interrupt grouping. We add a new function to handle this model.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Fix issue in gic_update_with_grouping() using the wrong combination of
>   flag and CPU control bank for checking if group 1 interrupts are enabled.
> ---
>  hw/intc/arm_gic.c  | 87 
> +-
>  hw/intc/gic_internal.h |  1 +
>  2 files changed, 87 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 808aa18..e33c470 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -52,6 +52,87 @@ static inline bool ns_access(void)
>  return true;
>  }
>
> +inline void gic_update_with_grouping(GICState *s)
> +{
> +int best_irq;
> +int best_prio;
> +int irq;
> +int irq_level;
> +int fiq_level;
> +int cpu;
> +int cm;
> +bool next_int;
> +bool next_grp0;
> +bool gicc_grp0_enabled;
> +bool gicc_grp1_enabled;
> +
> +for (cpu = 0; cpu < NUM_CPU(s); cpu++) {
> +cm = 1 << cpu;
> +gicc_grp0_enabled = s->cpu_control[cpu][0] & GICC_CTLR_S_EN_GRP0;
> +gicc_grp1_enabled = s->cpu_control[cpu][1] & GICC_CTLR_NS_EN_GRP1;
> +next_int = 0;
> +next_grp0 = 0;
> +
> +s->current_pending[cpu] = 1023;
> +if ((!s->enabled_grp[0] && !s->enabled_grp[1])
> +|| (!gicc_grp0_enabled && !gicc_grp1_enabled)) {
> +qemu_irq_lower(s->parent_irq[cpu]);
> +qemu_irq_lower(s->parent_fiq[cpu]);
> +return;
> +}
> +
> +/* Determine highest priority pending interrupt */
> +best_prio = 0x100;
> +best_irq = 1023;
> +for (irq = 0; irq < s->num_irq; irq++) {
> +if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm)) {
> +if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
> +best_prio = GIC_GET_PRIORITY(irq, cpu);
> +best_irq = irq;
> +}
> +}
> +}
> +
> +/* Priority of IRQ higher than priority mask? */
> +if (best_prio < s->priority_mask[cpu]) {
> +s->current_pending[cpu] = best_irq;
> +if (GIC_TEST_GROUP0(best_irq, cm) && s->enabled_grp[0]) {
> +/* TODO: Add subpriority handling (binary point register) */
> +if (best_prio < s->running_priority[cpu]) {
> +next_int = true;
> +next_grp0 = true;
> +}
> +} else if (!GIC_TEST_GROUP0(best_irq, cm) && s->enabled_grp[1]) {
> +/* TODO: Add subpriority handling (binary point register) */
> +if (best_prio < s->running_priority[cpu]) {
> +next_int = true;
> +next_grp0 = false;
> +}
> +}
> +}
> +
> +fiq_level = 0;
> +irq_level = 0;
> +if (next_int) {
> +if (next_grp0 && (s->cpu_control[cpu][0] & GICC_CTLR_S_FIQ_EN)) {
> +if (gicc_grp0_enabled) {
> +fiq_level = 1;
> +DPRINTF("Raised pending FIQ %d (cpu %d)\n", best_irq, 
> cpu);
> +}
> +} else {
> +if ((next_grp0 && gicc_grp0_enabled)
> + || (!next_grp0 && gicc_grp1_enabled)) {
> +irq_level = 1;
> +DPRINTF("Raised pending IRQ %d (cpu %d)\n", best_irq, 
> cpu);
> +}
> +}
> +}
> +/* Set IRQ/FIQ signal */
> +qemu_set_irq(s->parent_irq[cpu], irq_level);
> +qemu_set_irq(s->parent_fiq[cpu], fiq_level);
> +}
> +}

I'm not 100% convinced of the benefit of splitting out the
"no grouping" and "grouping" code paths (for instance it means
this function doesn't have the bugfix from commit b52b81e44f7
to honour the cpu-target-mask). I'll see how I feel when I
get to this patch in rework :-)

>  inline void gic_update_no_grouping(GICState *s)
>  {
>  int best_irq;
> @@ -95,7 +176,11 @@ inline void gic_update_no_grouping(GICState *s)
>  /* Update interrupt status after enabled or pending bits have been changed.  
> */
>  void gic_update(GICState *s)
>  {
> -gic_update_no_grouping(s);
> +if (s->revision >= 2 || s->security_extn) {
> +gic_update_with_grouping(s);
> +} else {
> +gic_update_no_grouping(s);
> +}
>  }
>
>  void gic_set_pending_private(GICState *s, int cpu, int irq)
> diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
> index e16a7e5..01859ed 100644
> --- a/hw/intc/gic_internal.h
> +++ b/hw/intc/gic_internal.h
> @@ -73,6 +73,7 @@
>  void gic_set_pending_private(GICState *s, int cpu, int irq);
>  uint32_t 

Re: [Qemu-devel] [PATCH v2 15/16] hw/intc/arm_gic: Break out gic_update() function

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Prepare to split gic_update() in two functions, one for GICs with
> interrupt grouping and one without grouping (existing).
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/intc/arm_gic.c  | 11 ---
>  hw/intc/gic_internal.h |  1 +
>  2 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index e01cfdc..808aa18 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -52,9 +52,7 @@ static inline bool ns_access(void)
>  return true;
>  }
>
> -/* TODO: Many places that call this routine could be optimized.  */
> -/* Update interrupt status after enabled or pending bits have been changed.  
> */
> -void gic_update(GICState *s)
> +inline void gic_update_no_grouping(GICState *s)
>  {
>  int best_irq;
>  int best_prio;
> @@ -93,6 +91,13 @@ void gic_update(GICState *s)
>  }
>  }
>
> +/* TODO: Many places that call this routine could be optimized.  */
> +/* Update interrupt status after enabled or pending bits have been changed.  
> */
> +void gic_update(GICState *s)
> +{
> +gic_update_no_grouping(s);
> +}
> +
>  void gic_set_pending_private(GICState *s, int cpu, int irq)
>  {
>  int cm = 1 << cpu;
> diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
> index 13fe5a6..e16a7e5 100644
> --- a/hw/intc/gic_internal.h
> +++ b/hw/intc/gic_internal.h
> @@ -73,6 +73,7 @@
>  void gic_set_pending_private(GICState *s, int cpu, int irq);
>  uint32_t gic_acknowledge_irq(GICState *s, int cpu);
>  void gic_complete_irq(GICState *s, int cpu, int irq);
> +inline void gic_update_no_grouping(GICState *s);

This should probably be 'static inline' and doesn't need
a prototype in the header file.

-- PMM



Re: [Qemu-devel] [PATCH v2 11/16] hw/intc/arm_gic: Handle grouping for GICC_HPPIR

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Grouping (GICv2) and Security Extensions change the behaviour of reads
> of the highest priority pending interrupt register (ICCHPIR/GICC_HPPIR).
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/intc/arm_gic.c  | 29 -
>  hw/intc/gic_internal.h |  1 +
>  2 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 9b021d7..15fd660 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -329,6 +329,33 @@ uint8_t gic_get_running_priority(GICState *s, int cpu)
>  }
>  }
>
> +uint16_t gic_get_current_pending_irq(GICState *s, int cpu)
> +{
> +bool isGrp0;
> +uint16_t pendingId = s->current_pending[cpu];
> +
> +if (pendingId < GIC_MAXIRQ && (s->revision >= 2 || s->security_extn)) {
> +isGrp0 = GIC_TEST_GROUP0(pendingId, (1 << cpu));
> +if ((isGrp0 && !s->enabled_grp[0])
> +|| (!isGrp0 && !s->enabled_grp[1])) {
> +return 1023;
> +}
> +if (s->security_extn) {
> +if (isGrp0 && ns_access()) {
> +/* Group0 interrupts hidden from Non-secure access */
> +return 1023;
> +}
> +if (!isGrp0 && !ns_access()
> +&& !(s->cpu_control[cpu][0] & GICC_CTLR_S_ACK_CTL)) {
> +/* Group1 interrupts only seen by Secure access if
> + * AckCtl bit set. */
> +return 1022;
> +}
> +}
> +}
> +return pendingId;
> +}

Some coding style nits about var name capitalisation and
multiline comment style, but otherwise OK.

-- PMM



Re: [Qemu-devel] [PATCH v2 14/16] hw/intc/arm_gic: Restrict priority view

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> GICs with Security Extensions restrict the non-secure view of the
> interrupt priority and priority mask registers.
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/intc/arm_gic.c  | 66 
> +-
>  hw/intc/gic_internal.h |  3 +++
>  2 files changed, 63 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 7eb72df..e01cfdc 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -258,11 +258,66 @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu)
>
>  void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val)
>  {
> +uint8_t prio = val;
> +
> +if (s->security_extn && ns_access()) {
> +if (GIC_TEST_GROUP0(irq, (1 << cpu))) {
> +return; /* Ignore Non-secure access of Group0 IRQ */
> +}
> +prio = 0x80 | (prio >> 1); /* Non-secure view */
> +}
> +
>  if (irq < GIC_INTERNAL) {
> -s->priority1[irq][cpu] = val;
> +s->priority1[irq][cpu] = prio;
>  } else {
> -s->priority2[(irq) - GIC_INTERNAL] = val;
> +s->priority2[(irq) - GIC_INTERNAL] = prio;
> +}
> +}
> +
> +uint32_t gic_get_priority(GICState *s, int cpu, int irq)
> +{
> +uint32_t prio = GIC_GET_PRIORITY(irq, cpu);
> +
> +if (s->security_extn && ns_access()) {
> +if (GIC_TEST_GROUP0(irq, (1 << cpu))) {
> +return 0; /* Non-secure access cannot read priority of Group0 
> IRQ */
> +}
> +prio = (prio << 1); /* Non-secure view */
>  }
> +return prio;
> +}
> +
> +void gic_set_priority_mask(GICState *s, int cpu, uint8_t val)
> +{
> +uint8_t pmask = (val & 0xff);
> +
> +if (s->security_extn && ns_access()) {
> +if (s->priority_mask[cpu] & 0x80) {
> +/* Priority Mask in upper half */
> +pmask = 0x80 | (pmask >> 1);
> +} else {
> +/* Non-secure write ignored if priority mask is in lower half */
> +return;
> +}
> +}
> +s->priority_mask[cpu] = pmask;
> +}
> +
> +uint32_t gic_get_priority_mask(GICState *s, int cpu)
> +{
> +uint32_t pmask = s->priority_mask[cpu];
> +
> +if (s->security_extn && ns_access()) {
> +if (pmask & 0x80) {
> +/* Priority Mask in upper half, return Non-secure view */
> +pmask = (pmask << 1);
> +} else {
> +/* Priority Mask in lower half, RAZ */
> +pmask = 0;
> +}
> +}
> +return pmask;
> +
>  }
>
>  uint32_t gic_get_cpu_control(GICState *s, int cpu)
> @@ -556,7 +611,7 @@ static uint32_t gic_dist_readb(void *opaque, hwaddr 
> offset)
>  irq = (offset - 0x400) + GIC_BASE_IRQ;
>  if (irq >= s->num_irq)
>  goto bad_reg;
> -res = GIC_GET_PRIORITY(irq, cpu);
> +res = gic_get_priority(s, cpu, irq);
>  } else if (offset < 0xc00) {
>  /* Interrupt CPU Target.  */
>  if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
> @@ -920,7 +975,7 @@ static uint32_t gic_cpu_read(GICState *s, int cpu, int 
> offset)
>  case 0x00: /* Control */
>  return gic_get_cpu_control(s, cpu);
>  case 0x04: /* Priority mask */
> -return s->priority_mask[cpu];
> +return gic_get_priority_mask(s, cpu);
>  case 0x08: /* Binary Point */
>  if (s->security_extn && ns_access()) {
>  /* BPR is banked. Non-secure copy stored in ABPR. */
> @@ -958,8 +1013,7 @@ static void gic_cpu_write(GICState *s, int cpu, int 
> offset, uint32_t value)
>  case 0x00: /* Control */
>  return gic_set_cpu_control(s, cpu, value);
>  case 0x04: /* Priority mask */
> -s->priority_mask[cpu] = (value & 0xff);
> -break;
> +return gic_set_priority_mask(s, cpu, value);

'return some_function_returning_void()' again.

>  case 0x08: /* Binary Point */
>  if (s->security_extn && ns_access()) {
>  /* BPR is banked. Non-secure copy stored in ABPR. */
> diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
> index fbb1f66..13fe5a6 100644
> --- a/hw/intc/gic_internal.h
> +++ b/hw/intc/gic_internal.h
> @@ -76,6 +76,9 @@ void gic_complete_irq(GICState *s, int cpu, int irq);
>  void gic_update(GICState *s);
>  void gic_init_irqs_and_distributor(GICState *s);
>  void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val);
> +uint32_t gic_get_priority(GICState *s, int cpu, int irq);
> +void gic_set_priority_mask(GICState *s, int cpu, uint8_t val);
> +uint32_t gic_get_priority_mask(GICState *s, int cpu);
>  uint32_t gic_get_cpu_control(GICState *s, int cpu);
>  void gic_set_cpu_control(GICState *s, int cpu, uint32_t value);
>  uint8_t gic_get_running_priority(GICState *s, int cpu);
> --
> 1.8.3.2
>

-- PMM



Re: [Qemu-devel] [PATCH v2 13/16] hw/intc/arm_gic: Change behavior of IAR writes

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Grouping (GICv2) and Security Extensions change the behavior of IAR
> reads. Acknowledging Group0 interrupts is only allowed from Secure
> state and acknowledging Group1 interrupts from Secure state is only
> allowed if AckCtl bit is set.

Subject says "IAR writes" but it means "IAR reads".

>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Fix issue in gic_acknowledge_irq() where the GICC_CTLR_S_ACK_CTL flag is
>   applied without first checking whether the read is secure or non-secure.
>   Secure reads of IAR when AckCtl is 0 return a spurious ID of 1022, but
>   non-secure ignores the flag.
> ---
>  hw/intc/arm_gic.c | 25 +
>  1 file changed, 25 insertions(+)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 2d83225..7eb72df 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -190,11 +190,36 @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu)
>  int ret, irq, src;
>  int cm = 1 << cpu;
>  irq = s->current_pending[cpu];
> +bool isGrp0;
>  if (irq == 1023
>  || GIC_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
>  DPRINTF("ACK no pending IRQ\n");
>  return 1023;
>  }
> +
> +if (s->revision >= 2 || s->security_extn) {
> +isGrp0 = GIC_TEST_GROUP0(irq, (1 << cpu));
> +if ((isGrp0 && (!s->enabled_grp[0]
> +|| !(s->cpu_control[cpu][0] & GICC_CTLR_S_EN_GRP0)))
> +   || (!isGrp0 && (!s->enabled_grp[1]
> +|| !(s->cpu_control[cpu][1] & GICC_CTLR_NS_EN_GRP1 {
> +return 1023;
> +}
> +
> +if ((s->revision >= 2 && !s->security_extn)
> +|| (s->security_extn && !ns_access())) {
> +if (!isGrp0 && !ns_access() &&
> +!(s->cpu_control[cpu][0] & GICC_CTLR_S_ACK_CTL)) {
> +DPRINTF("Read of IAR ignored for Group1 interrupt %d "
> +"(AckCtl disabled)\n", irq);
> +return 1022;
> +}
> +} else if (s->security_extn && ns_access() && isGrp0) {
> +DPRINTF("Non-secure read of IAR ignored for Group0 interrupt 
> %d\n",
> +irq);
> +return 1023;
> +}
> +}

This doesn't quite line up with the pseudocode in the GIC spec.
It's probably going to be easier to read with some utility functions
for 'grouping enabled' etc.
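
Something along these lines, say (a sketch only, using the fields and
macros this series already introduces):

static inline bool gic_dist_grp_enabled(GICState *s, int grp)
{
    return s->enabled_grp[grp];
}

static inline bool gic_cpu_grp_enabled(GICState *s, int cpu, int grp)
{
    return grp ? (s->cpu_control[cpu][1] & GICC_CTLR_NS_EN_GRP1)
               : (s->cpu_control[cpu][0] & GICC_CTLR_S_EN_GRP0);
}

would let the checks above read much closer to the pseudocode.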

>  s->last_active[irq][cpu] = s->running_irq[cpu];
>
>  if (s->revision == REV_11MPCORE || s->revision == REV_NVIC) {
> --
> 1.8.3.2
>

-- PMM



Re: [Qemu-devel] [PATCH v2 12/16] hw/intc/arm_gic: Change behavior of EOIR writes

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Grouping (GICv2) and Security Extensions change the behavior of EOIR
> writes. Completing Group0 interrupts is only allowed from Secure state
> and completing Group1 interrupts from Secure state is only allowed if
> AckCtl bit is set.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Fix issue with EOIR writes involving AckCtl.  AckCtl is ignored on EOIR
>   group 1 interrupts when non-secure.  Group 1 interrupts are only ignored 
> when
>   secure and AckCTl is clear.
> ---
>  hw/intc/arm_gic.c | 15 +++
>  1 file changed, 15 insertions(+)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 15fd660..2d83225 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -384,6 +384,21 @@ void gic_complete_irq(GICState *s, int cpu, int irq)
>  GIC_SET_PENDING(irq, cm);
>  update = 1;
>  }
> +} else if ((s->revision >= 2 && !s->security_extn)
> + || (s->security_extn && !ns_access())) {
> +/* Handle GICv2 without Security Extensions or GIC with Security
> + * Extensions and a secure write.
> + */
> +if (!GIC_TEST_GROUP0(irq, cm) && !ns_access()
> +&& !(s->cpu_control[cpu][0] & GICC_CTLR_S_ACK_CTL)) {
> +/* Unpredictable. We choose to ignore. */
> +DPRINTF("EOI for Group1 interrupt %d ignored "
> +"(AckCtl disabled)\n", irq);
> +return;
> +}

For GICv2 without the security extns, EOIR accesses should behave
as if they were secure, so the call to ns_access() inside this
conditional is wrong. We probably need to disentangle the v1-vs-v2
differences here.

> +} else if (s->security_extn && ns_access() && GIC_TEST_GROUP0(irq, cm)) {
> +DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
> +return;
>  }

-- PMM



Re: [Qemu-devel] [PATCH v2 10/16] hw/intc/arm_gic: Implement Non-secure view of RPR

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> For GICs with Security Extensions Non-secure reads have a restricted
> view on the current running priority.
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/intc/arm_gic.c  | 17 -
>  hw/intc/gic_internal.h |  1 +
>  2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 3761d12..9b021d7 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -314,6 +314,21 @@ void gic_set_cpu_control(GICState *s, int cpu, uint32_t 
> value)
>  }
>  }
>
> +uint8_t gic_get_running_priority(GICState *s, int cpu)
> +{
> +if (s->security_extn && ns_access()) {
> +if (s->running_priority[cpu] & 0x80) {
> +/* Running priority in upper half, return Non-secure view */
> +return s->running_priority[cpu] << 1;
> +} else {
> +/* Running priority in lower half, RAZ */
> +return 0;
> +}
> +} else {
> +return s->running_priority[cpu];
> +}
> +}
> +
>  void gic_complete_irq(GICState *s, int cpu, int irq)
>  {
>  int update = 0;
> @@ -849,7 +864,7 @@ static uint32_t gic_cpu_read(GICState *s, int cpu, int 
> offset)
>  case 0x0c: /* Acknowledge */
>  return gic_acknowledge_irq(s, cpu);
>  case 0x14: /* Running Priority */
> -return s->running_priority[cpu];
> +return gic_get_running_priority(s, cpu);
>  case 0x18: /* Highest Pending Interrupt */
>  return s->current_pending[cpu];
>  case 0x1c: /* Aliased Binary Point */
> diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
> index e360de6..821ce16 100644
> --- a/hw/intc/gic_internal.h
> +++ b/hw/intc/gic_internal.h
> @@ -78,6 +78,7 @@ void gic_init_irqs_and_distributor(GICState *s);
>  void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val);
>  uint32_t gic_get_cpu_control(GICState *s, int cpu);
>  void gic_set_cpu_control(GICState *s, int cpu, uint32_t value);
> +uint8_t gic_get_running_priority(GICState *s, int cpu);

I think this patch should be combined with patch 14 (which
deals with the other half of the priority register changes.)

-- PMM



Re: [Qemu-devel] [PATCH v2 09/16] hw/intc/arm_gic: Make ICCBPR/GICC_BPR banked

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> This register is banked in GICs with Security Extensions. We store the
> non-secure copy of BPR in abpr, which acts as the secure-state alias of
> the non-secure copy. ABPR itself is only accessible from secure state
> if the GIC implements Security Extensions.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Fix ABPR read handling when security extensions are not present
> - Fix BPR write to take into consideration the minimum value written to ABPR
>   and restrict BPR->ABPR mirroring to GICv2 and up.
> - Fix ABPR write to take into consideration the minimum value written
> - Fix ABPR write condition break-down to include mirroring of ABPR writes to
>   BPR.
> ---
>  hw/intc/arm_gic.c| 54 
> 
>  include/hw/intc/arm_gic_common.h | 11 +---
>  2 files changed, 57 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 3c0414f..3761d12 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -840,7 +840,12 @@ static uint32_t gic_cpu_read(GICState *s, int cpu, int 
> offset)
>  case 0x04: /* Priority mask */
>  return s->priority_mask[cpu];
>  case 0x08: /* Binary Point */
> -return s->bpr[cpu];
> +if (s->security_extn && ns_access()) {
> +/* BPR is banked. Non-secure copy stored in ABPR. */
> +return s->abpr[cpu];
> +} else {
> +return s->bpr[cpu];
> +}
>  case 0x0c: /* Acknowledge */
>  return gic_acknowledge_irq(s, cpu);
>  case 0x14: /* Running Priority */
> @@ -848,7 +853,14 @@ static uint32_t gic_cpu_read(GICState *s, int cpu, int 
> offset)
>  case 0x18: /* Highest Pending Interrupt */
>  return s->current_pending[cpu];
>  case 0x1c: /* Aliased Binary Point */
> -return s->abpr[cpu];
> +if (!s->security_extn || (s->security_extn && ns_access())) {
> +/* If Security Extensions are present ABPR is a secure register,
> + * only accessible from secure state.
> + */
> +return 0;
> +} else {
> +return s->abpr[cpu];
> +}
>  case 0xd0: case 0xd4: case 0xd8: case 0xdc:
>  return s->apr[(offset - 0xd0) / 4][cpu];
>  default:
> @@ -867,13 +879,45 @@ static void gic_cpu_write(GICState *s, int cpu, int 
> offset, uint32_t value)
>  s->priority_mask[cpu] = (value & 0xff);
>  break;
>  case 0x08: /* Binary Point */
> -s->bpr[cpu] = (value & 0x7);
> +if (s->security_extn && ns_access()) {
> +/* BPR is banked. Non-secure copy stored in ABPR. */
> +/* The non-secure (ABPR) must not be below an implementation
> + * defined minimum value between 1-4.
> + * NOTE: BPR_MIN is currently set to 0, which is always true 
> given
> + *   the value is unsigned, so no check is necessary.
> + */
> +s->abpr[cpu] = (GIC_MIN_ABPR <= (value & 0x7))
> +? (value & 0x7) : GIC_MIN_ABPR;
> +} else {
> +s->bpr[cpu] = (value & 0x7);
> +if (s->revision >= 2) {
> +/* On GICv2 without sec ext, GICC_ABPR is an alias of 
> GICC_BPR
> + * so mirror the write.
> + */
> + s->abpr[cpu] = s->bpr[cpu];

My reading of the spec says that GICv2 without Security extensions
should not alias these two registers.

-- PMM



Re: [Qemu-devel] [PATCH v2 08/16] hw/intc/arm_gic: Make ICCICR/GICC_CTLR banked

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> ICCICR/GICC_CTLR is banked in GICv1 implementations with Security
> Extensions, or in GICv2 independent of Security Extensions.
> This makes it possible to enable forwarding of interrupts from
> the CPU interfaces to the connected processors for Group0 and Group1.
>
> We also allow to set additional bits like AckCtl and FIQEn by changing
> the type from bool to uint32. Since the field does not only store the
> enable bit anymore and since we are touching the vmstate, we use the
> opportunity to rename the field to cpu_control.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Rework gic_set_cpu_control() and gic_get_cpu_control() to close gap on
>   handling GICv1 wihtout security extensions.
> - Fix use of incorrect control index in update.
> ---
>  hw/intc/arm_gic.c| 82 
> +---
>  hw/intc/arm_gic_common.c |  5 ++-
>  hw/intc/arm_gic_kvm.c|  8 ++--
>  hw/intc/armv7m_nvic.c|  2 +-
>  hw/intc/gic_internal.h   | 14 +++
>  include/hw/intc/arm_gic_common.h |  2 +-
>  6 files changed, 100 insertions(+), 13 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 1db15aa..3c0414f 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -66,7 +66,7 @@ void gic_update(GICState *s)
>  for (cpu = 0; cpu < NUM_CPU(s); cpu++) {
>  cm = 1 << cpu;
>  s->current_pending[cpu] = 1023;
> -if (!s->enabled || !s->cpu_enabled[cpu]) {
> +if (!s->enabled || !(s->cpu_control[cpu][1] & 1)) {
>  qemu_irq_lower(s->parent_irq[cpu]);
>  return;
>  }
> @@ -240,6 +240,80 @@ void gic_set_priority(GICState *s, int cpu, int irq, 
> uint8_t val)
>  }
>  }
>
> +uint32_t gic_get_cpu_control(GICState *s, int cpu)
> +{
> +uint32_t ret;
> +
> +if (!s->security_extn) {
> +if (s->revision == 1) {
> +ret = s->cpu_control[cpu][1];
> +ret &= 0x1; /* Mask of reserved bits */
> +} else {
> +ret = s->cpu_control[cpu][0];
> +ret &= GICC_CTLR_S_MASK;   /* Mask of reserved bits */
> +}
> +} else {
> +if (ns_access()) {
> +ret = s->cpu_control[cpu][1];
> +ret &= GICC_CTLR_NS_MASK;   /* Mask of reserved bits */
> +if (s->revision == 1) {
> +ret &= 0x1; /* Mask of reserved bits */
> +}
> +} else {
> +ret = s->cpu_control[cpu][0];
> +ret &= GICC_CTLR_S_MASK;   /* Mask of reserved bits */
> +}
> +}
> +
> +return ret;
> +}
> +
> +void gic_set_cpu_control(GICState *s, int cpu, uint32_t value)
> +{
> +if (!s->security_extn) {
> +if (s->revision == 1) {
> +s->cpu_control[cpu][1] = value & 0x1;
> +DPRINTF("CPU Interface %d %sabled\n", cpu,
> +s->cpu_control[cpu][1] ? "En" : "Dis");
> +} else {
> +/* Write to Secure instance of the register */
> +s->cpu_control[cpu][0] = value & GICC_CTLR_S_MASK;
> +/* Synchronize EnableGrp1 alias of Non-secure copy */
> +s->cpu_control[cpu][1] &= ~GICC_CTLR_NS_EN_GRP1;
> +s->cpu_control[cpu][1] |=
> +(value & GICC_CTLR_S_EN_GRP1) ? GICC_CTLR_NS_EN_GRP1 : 0;
> +DPRINTF("CPU Interface %d: Group0 Interrupts %sabled, "
> +"Group1 Interrupts %sabled\n", cpu,
> +(s->cpu_control[cpu][0] & GICC_CTLR_S_EN_GRP0) ? "En" : 
> "Dis",
> +(s->cpu_control[cpu][0] & GICC_CTLR_S_EN_GRP1) ? "En" : 
> "Dis");
> +}
> +} else {
> +if (ns_access()) {
> +if (s->revision == 1) {
> +s->cpu_control[cpu][1] = value & 0x1;
> +DPRINTF("CPU Interface %d %sabled\n", cpu,
> +s->cpu_control[cpu][1] ? "En" : "Dis");
> +} else {
> +/* Write to Non-secure instance of the register */
> +s->cpu_control[cpu][1] = value & GICC_CTLR_NS_MASK;
> +/* Synchronize EnableGrp1 alias of Secure copy */
> +s->cpu_control[cpu][0] &= ~GICC_CTLR_S_EN_GRP1;
> +s->cpu_control[cpu][0] |=
> +(value & GICC_CTLR_NS_EN_GRP1) ? GICC_CTLR_S_EN_GRP1 : 0;
> +}
> +DPRINTF("CPU Interface %d: Group1 Interrupts %sabled\n", cpu,
> +(s->cpu_control[cpu][1] & GICC_CTLR_NS_EN_GRP1) ? "En" : 
> "Dis");
> +} else {
> +/* Write to Secure instance of the register */
> +s->cpu_control[cpu][0] = value & GICC_CTLR_S_MASK;
> +/* Synchronize EnableGrp1 alias of Non-secure copy */
> +s->cpu_control[cpu][1] &= ~GICC_CTLR_NS_EN_GRP1;
> +s->cpu_control[cpu][1] |=
> +(value & GICC_CTLR_S_EN_GRP1) ? GICC_CTLR_NS_EN_GRP1 : 0;
> + 

Re: [Qemu-devel] [PATCH v2 00/16] target-arm: Add GICv1/SecExt and GICv2/Grouping

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:11, Greg Bellows  wrote:
> This patch series adds ARM GICv1 and GICv2 security extension support.  As a
> result GIC interrupt grouping and FIQ enablement have also been added.  FIQ
> enablement is limited to the ARM vexpress and virt machines.
>
> At the current moment, the security extension capability is not enabled as it
> depends on ARM secure address space support for proper operation.  Instead,
> secure checks are hardwired as non-secure.

Hi Greg -- just noticed you forgot to add your own signed-off-by:
tag to these patches (needed as well as Fabian's since they passed
through your hands). Could you reply to this cover letter giving it,
please?

thanks
-- PMM



Re: [Qemu-devel] [PATCH v2 07/16] hw/intc/arm_gic: Make ICDDCR/GICD_CTLR banked

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> ICDDCR/GICD_CTLR is banked in GICv1 implementations with Security
> Extensions, or in GICv2 independent of Security Extensions.
> This makes it possible to enable forwarding of interrupts from
> Distributor to the CPU interfaces for Group0 and Group1.
>
> EnableGroup0 (Bit [1]) in GICv1 is IMPDEF. Since this bit (Enable
> Non-secure) is present in the integrated IC of the Cortex-A9 MPCore,
> which implements the GICv1 profile, we support this bit in GICv1 too.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Fix gic_dist_writeb() update of GICD_CTRL to only use bit[0] of the
>   EnableGrp1 field not bit[1].
> - Add clarifying comments
> ---
>  hw/intc/arm_gic.c| 49 
> 
>  hw/intc/arm_gic_common.c |  2 +-
>  include/hw/intc/arm_gic_common.h |  7 +-
>  3 files changed, 52 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 36ac188..1db15aa 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -302,8 +302,25 @@ static uint32_t gic_dist_readb(void *opaque, hwaddr 
> offset)
>  cpu = gic_get_current_cpu(s);
>  cm = 1 << cpu;
>  if (offset < 0x100) {
> -if (offset == 0)
> -return s->enabled;
> +if (offset == 0) {  /* GICD_CTLR */
> +res = 0;
> +if ((s->revision == 2 && !s->security_extn)
> +|| (s->security_extn && !ns_access())) {
> +/* In this case the GICD_CTRL contains both a group0 and 
> group1

Typo: should be _CTLR.

> + * enable bit, so we create the resuling value by aggregating
> + * the bits from the two enable values.
> + * The group0 enable bit is only visible to secure accesses.
> + * The group1 enable bit (bit[1]) is an alias of bit[0] in
> + * the non-secure copy (enabled_grp[1]).
> + */
> +res = (s->enabled_grp[1] << 1) | s->enabled_grp[0];
> +} else if (s->security_extn && ns_access()) {
> +res = s->enabled_grp[1];
> +} else {
> +/* Neither GICv2 nor Security Extensions present */
> +res = s->enabled;
> +}
> +}
>  if (offset == 4)
>  /* Interrupt Controller Type Register */
>  return ((s->num_irq / 32) - 1)
> @@ -471,8 +488,32 @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
>  cpu = gic_get_current_cpu(s);
>  if (offset < 0x100) {
>  if (offset == 0) {
> -s->enabled = (value & 1);
> -DPRINTF("Distribution %sabled\n", s->enabled ? "En" : "Dis");
> +if ((s->revision == 2 && !s->security_extn)
> +|| (s->security_extn && !ns_access())) {
> +s->enabled_grp[0] = value & (1U << 0); /* EnableGrp0 */
> +/* For a GICv1 with Security Extn "EnableGrp1" is IMPDEF. */
> +/* We only use the first bit of the enabled_grp vars to
> + * indicate enabled or disabled.  In this case we have to 
> shift
> + * the incoming value down to the low bit because the group1
> + * enabled bit is bit[1] in the secure/GICv2 GICD_CTLR..

Typo: repeated '.'.

> + */
> +s->enabled_grp[1] = (value >> 1) & 0x1; /* EnableGrp1 */
> +DPRINTF("Group0 distribution %sabled\n"
> +"Group1 distribution %sabled\n",
> +s->enabled_grp[0] ? "En" : "Dis",
> +s->enabled_grp[1] ? "En" : "Dis");
> +} else if (s->security_extn && ns_access()) {
> +/* If we are non-secure only the group1 enable bit is visible
> + * as bit[0] in the GICD_CTLR.
> + */
> +s->enabled_grp[1] = (value & 0x1);
> +DPRINTF("Group1 distribution %sabled\n",
> +s->enabled_grp[1] ? "En" : "Dis");
> +} else {
> +/* Neither GICv2 nor Security Extensions present */
> +s->enabled = (value & 0x1);
> +DPRINTF("Distribution %sabled\n", s->enabled ? "En" : "Dis");
> +}
>  } else if (offset < 4) {
>  /* ignored.  */
>  } else if (offset >= 0x80) {
> diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
> index 28f3b2a..c44050d 100644
> --- a/hw/intc/arm_gic_common.c
> +++ b/hw/intc/arm_gic_common.c
> @@ -64,7 +64,7 @@ static const VMStateDescription vmstate_gic = {
>  .pre_save = gic_pre_save,
>  .post_load = gic_post_load,
>  .fields = (VMStateField[]) {
> -VMSTATE_BOOL(enabled, GICState),
> +VMSTATE_UINT8_ARRAY(enabled_grp, GICState, GIC_NR_GROUP),
>  VMSTATE_BOO

Re: [Qemu-devel] [PATCH v2 06/16] hw/intc/arm_gic: Add Interrupt Group Registers

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Interrupt Group Registers (previously called Interrupt Security
> Registers) as defined in GICv1 with Security Extensions or GICv2 allow
> to configure interrupts as Secure (Group0) or Non-secure (Group1).
> In GICv2 these registers are implemented independent of the existence of
> Security Extensions.

Worth mentioning in the commit that this patch only
implements the register accessors, not the functionality
that the bits control.

> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Add clarifying comments to gic_dist_readb/writeb on interrupt group register
>   update
> - Swap GIC_SET_GROUP0/1 macro logic.  Setting the irq_state.group field for
>   group 0 should clear the bit not set it.  Similarly, setting the field for
>   group 1 should set the bit not clear it.
> ---
>  hw/intc/arm_gic.c| 49 
> +---
>  hw/intc/arm_gic_common.c |  1 +
>  hw/intc/gic_internal.h   |  4 
>  include/hw/intc/arm_gic_common.h |  1 +
>  4 files changed, 52 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index bee71a1..36ac188 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -312,8 +312,27 @@ static uint32_t gic_dist_readb(void *opaque, hwaddr 
> offset)
>  if (offset < 0x08)
>  return 0;
>  if (offset >= 0x80) {
> -/* Interrupt Security , RAZ/WI */
> -return 0;
> +/* Interrupt Group Registers
> + *
> + * For GIC with Security Extn and Non-secure access RAZ/WI
> + * For GICv1 without Security Extn RAZ/WI
> + */
> +res = 0;
> +if (!(s->security_extn && ns_access()) &&
> +((s->revision == 1 && s->security_extn)
> +|| s->revision == 2)) {

It would probably be clearer to write this as
    if (whatever) {
        return 0;
    }
    if (whatever) {
        return 0;
    }
    logic for registers;

rather than inverting the conditions from their more
natural and readable sense.

Also I suspect we will want some utility functions. One
that springs to mind here would be a gic_has_groups()
which returns (gic is v2 || (gic is v1 && has security extns)).
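
For illustration, a minimal sketch of such a helper (not part of the
posted series, and the name is only a suggestion):

static inline bool gic_has_groups(GICState *s)
{
    /* Interrupt grouping exists on GICv2, or on GICv1 when the
     * Security Extensions are implemented.
     */
    return s->revision == 2 || (s->revision == 1 && s->security_extn);
}
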

> +/* Every byte offset holds 8 group status bits */
> +irq = (offset - 0x080) * 8 + GIC_BASE_IRQ;

Better written as 0x80 I think.

> +if (irq >= s->num_irq) {
> +goto bad_reg;
> +}
> +for (i = 0; i < 8; i++) {
> +if (!GIC_TEST_GROUP0(irq + i, cm)) {
> +res |= (1 << i);
> +}
> +}
> +}
> +return res;
>  }
>  goto bad_reg;
>  } else if (offset < 0x200) {
> @@ -457,7 +476,31 @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
>  } else if (offset < 4) {
>  /* ignored.  */
>  } else if (offset >= 0x80) {
> -/* Interrupt Security Registers, RAZ/WI */
> +/* Interrupt Group Registers
> + *
> + * For GIC with Security Extn and Non-secure access RAZ/WI
> + * For GICv1 without Security Extn RAZ/WI
> + */
> +if (!(s->security_extn && ns_access()) &&
> +((s->revision == 1 && s->security_extn)
> +|| s->revision == 2)) {
> +/* Every byte offset holds 8 group status bits */
> +irq = (offset - 0x080) * 8 + GIC_BASE_IRQ;
> +if (irq >= s->num_irq) {
> +goto bad_reg;
> +}
> +for (i = 0; i < 8; i++) {
> +/* Group bits are banked for private interrupts 
> (internal)*/

Missing trailing space before */.

> +int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : 
> ALL_CPU_MASK;
> +if (value & (1 << i)) {
> +/* Group1 (Non-secure) */
> +GIC_SET_GROUP1(irq + i, cm);
> +} else {
> +/* Group0 (Secure) */
> +GIC_SET_GROUP0(irq + i, cm);
> +}
> +}
> +}
>  } else {
>  goto bad_reg;
>  }
> diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
> index e35049d..28f3b2a 100644
> --- a/hw/intc/arm_gic_common.c
> +++ b/hw/intc/arm_gic_common.c
> @@ -52,6 +52,7 @@ static const VMStateDescription vmstate_gic_irq_state = {
>  VMSTATE_UINT8(level, gic_irq_state),
>  VMSTATE_BOOL(model, gic_irq_state),
>  VMSTATE_BOOL(edge_trigger, gic_irq_state),
> +VMSTATE_UINT8(group, gic_irq_state),

We want to bump the vmstate version at some point in this
series, but if we're making several

Re: [Qemu-devel] [PATCH v2 05/16] hw/intc/arm_gic: Add ns_access() function

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Security Extensions for GICv1 and GICv2 use register banking
> to provide transparent access to separate Secure and Non-secure
> copies of GIC configuration registers. This function will later
> be replaced by code determining the security state of a read/write
> access to a register.
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/intc/arm_gic.c | 7 +++
>  1 file changed, 7 insertions(+)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 0ee7778..bee71a1 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -45,6 +45,13 @@ static inline int gic_get_current_cpu(GICState *s)
>  return 0;
>  }
>
> +/* Security state of a read / write access */
> +static inline bool ns_access(void)
> +{
> +/* TODO: use actual security state */
> +return true;
> +}

We can do this with the transaction attributes patchset now.
However this function and its callsites will need adjusting
because we need the MemTxAttrs value to answer the question.
(Given that the question is just "attrs.secure" we probably
don't need the wrapper unless we want to include in this
"accesses are always secure if the GIC doesn't implement
the security extensions" logic.)
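
A minimal sketch of that shape, assuming the callsites are updated to
pass the transaction attributes through (not part of the posted series):

static inline bool ns_access(GICState *s, MemTxAttrs attrs)
{
    /* Treat every access as secure on a GIC without the Security
     * Extensions; otherwise the transaction attribute decides.
     */
    return s->security_extn && !attrs.secure;
}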

-- PMM



Re: [Qemu-devel] [PATCH v2 04/16] hw/intc/arm_gic: Add Security Extensions property

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:12, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> The existing implementation does not support Security Extensions mentioned
> in the GICv1 and GICv2 architecture specification. Security Extensions are
> not available on all GICs. This property makes it possible to enable Security 
> Extensions.
>
> It also makes GICD_TYPER/ICDICTR.SecurityExtn RAO for GICs which implement
> Security Extensions.
>
> Signed-off-by: Fabian Aggeler 
>
> ---
>
> v1 -> v2
> - Change GICState security extension property from a uint8 type to bool
> ---
>  hw/intc/arm_gic.c| 5 -
>  hw/intc/arm_gic_common.c | 1 +
>  include/hw/intc/arm_gic_common.h | 1 +
>  3 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index ea05f8f..0ee7778 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -298,7 +298,10 @@ static uint32_t gic_dist_readb(void *opaque, hwaddr 
> offset)
>  if (offset == 0)
>  return s->enabled;
>  if (offset == 4)
> -return ((s->num_irq / 32) - 1) | ((NUM_CPU(s) - 1) << 5);
> +/* Interrupt Controller Type Register */
> +return ((s->num_irq / 32) - 1)
> +| ((NUM_CPU(s) - 1) << 5)
> +| (s->security_extn << 10);
>  if (offset < 0x08)
>  return 0;
>  if (offset >= 0x80) {
> diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
> index 18b01ba..e35049d 100644
> --- a/hw/intc/arm_gic_common.c
> +++ b/hw/intc/arm_gic_common.c
> @@ -149,6 +149,7 @@ static Property arm_gic_common_properties[] = {
>   * (Internally, 0x also indicates "not a GIC but an NVIC".)
>   */
>  DEFINE_PROP_UINT32("revision", GICState, revision, 1),
> +DEFINE_PROP_BOOL("security-extn", GICState, security_extn, 0),

Could use a comment describing the property. Also, we should
make the name of the property be in line with what we picked
for board or CPU level TZ properties. I think that's "secure".

Trying to set this property on something that's not a GICv1
or GICv2 should cause an error at realize.
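
A rough sketch of such a check in the GIC's realize function (names and
message are illustrative only):

    if (s->security_extn && s->revision != 1 && s->revision != 2) {
        error_setg(errp, "Security Extensions are only available on "
                   "GICv1 and GICv2");
        return;
    }
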

>  DEFINE_PROP_END_OF_LIST(),
>  };
>
> diff --git a/include/hw/intc/arm_gic_common.h 
> b/include/hw/intc/arm_gic_common.h
> index 01c6f24..7825134 100644
> --- a/include/hw/intc/arm_gic_common.h
> +++ b/include/hw/intc/arm_gic_common.h
> @@ -105,6 +105,7 @@ typedef struct GICState {
>  MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
>  uint32_t num_irq;
>  uint32_t revision;
> +bool security_extn;
>  int dev_fd; /* kvm device fd if backed by kvm vgic support */
>  } GICState;

-- PMM



Re: [Qemu-devel] [PATCH v2 02/16] hw/arm/vexpress.c: Wire FIQ between CPU <> GIC

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:11, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Connect FIQ output of the GIC CPU interfaces to the CPUs.
>
> Signed-off-by: Fabian Aggeler 
> ---
>  hw/arm/vexpress.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
> index 7cbd13f..7121b8a 100644
> --- a/hw/arm/vexpress.c
> +++ b/hw/arm/vexpress.c
> @@ -229,6 +229,8 @@ static void init_cpus(const char *cpu_model, const char 
> *privdev,
>  DeviceState *cpudev = DEVICE(qemu_get_cpu(n));
>
>  sysbus_connect_irq(busdev, n, qdev_get_gpio_in(cpudev, ARM_CPU_IRQ));
> +sysbus_connect_irq(busdev, n+smp_cpus,
> +  qdev_get_gpio_in(cpudev, ARM_CPU_FIQ));
>  }
>  }

This and patch 3 aren't wrong, but there's probably other board
level wiring up to do (eg setting the "enable security extns"
property on the GIC object). We should do all the board level
changes last, after the GIC changes.
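
i.e. something along these lines in the board code, once the GIC changes
are in (sketch only; "security-extn" is the property name from this
series, which may yet become "secure" as discussed in the review of
patch 04):

    /* 'gicdev' here stands for whatever DeviceState the board uses
     * for its GIC; the property must be set before the device is
     * realized.
     */
    qdev_prop_set_bit(gicdev, "security-extn", true);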

-- PMM



Re: [Qemu-devel] [PATCH v2 01/16] hw/intc/arm_gic: Request FIQ sources

2015-04-14 Thread Peter Maydell
On 30 October 2014 at 22:11, Greg Bellows  wrote:
> From: Fabian Aggeler 
>
> Preparing for FIQ lines from GIC to CPUs, which is needed for GIC
> Security Extensions.
>
> Signed-off-by: Fabian Aggeler 

(Yes, this is review on a six month old patchset. My
punishment for taking so long to get to this is that
I'm the one that's going to have to pick up this work
and fix the review issues :-))

> ---
>  hw/intc/arm_gic.c| 3 +++
>  include/hw/intc/arm_gic_common.h | 1 +
>  2 files changed, 4 insertions(+)
>
> diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
> index 270ce05..ea05f8f 100644
> --- a/hw/intc/arm_gic.c
> +++ b/hw/intc/arm_gic.c
> @@ -789,6 +789,9 @@ void gic_init_irqs_and_distributor(GICState *s)
>  for (i = 0; i < NUM_CPU(s); i++) {
>  sysbus_init_irq(sbd, &s->parent_irq[i]);
>  }
> +for (i = 0; i < NUM_CPU(s); i++) {
> +sysbus_init_irq(sbd, &s->parent_fiq[i]);
> +}
>  memory_region_init_io(&s->iomem, OBJECT(s), &gic_dist_ops, s,
>"gic_dist", 0x1000);
>  }
> diff --git a/include/hw/intc/arm_gic_common.h 
> b/include/hw/intc/arm_gic_common.h
> index f6887ed..01c6f24 100644
> --- a/include/hw/intc/arm_gic_common.h
> +++ b/include/hw/intc/arm_gic_common.h
> @@ -50,6 +50,7 @@ typedef struct GICState {
>  /*< public >*/
>
>  qemu_irq parent_irq[GIC_NCPU];
> +qemu_irq parent_fiq[GIC_NCPU];
>  bool enabled;
>  bool cpu_enabled[GIC_NCPU];

This is OK, but we need to init the new irq lines in
arm_gic_kvm.c too, to keep them with the same interface
to the rest of QEMU.
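
i.e. a loop mirroring the TCG GIC change above (sketch, untested):

    /* In the KVM GIC's realize function, next to the existing
     * parent_irq initialisation.
     */
    for (i = 0; i < s->num_cpu; i++) {
        sysbus_init_irq(sbd, &s->parent_fiq[i]);
    }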

-- PMM



Re: [Qemu-devel] feature proposal: checkpoint-assisted migration

2015-04-14 Thread Dr. David Alan Gilbert
* Thomas Knauth (thomas.kna...@googlemail.com) wrote:
> Dear list,
> 
> my research revolves around cloud computing, virtual machines and
> migration. In this context I came across the following: a recent study
> by IBM indicates that a typical VM only migrates between a small set
> of physical servers; often just two.
> 
> The potential for optimization is clear. By storing a snapshot of the
> VM's memory on the migration source, we can reuse (some) of this
> information on a subsequent incoming migration.
> 
> In the course of our research we implemented a prototype of this
> feature within kvm/qemu. We would like to contribute it to mainline,
> but it needs cleanup and proper testing. As is the nature of
> research prototypes, the code is ugly and not well integrated with the
> existing kvm/qemu codebase. To avoid confusion and irritation, I want
> to mention that I have little experience in contributing to large
> open-source projects. So if I violate some unwritten protocol or best
> practices, please be patient.
> 
> Initially, I'm hoping to get some feedback on the current state of the
> implementation. It would be immensely helpful if someone more
> intimately familiar with the migration code/framework could comment on
> the prototype's current state. The code very likely needs restructuring
> to make it fit better into the overall codebase. Getting information
> on what needs to change and how to change it would be my goal.
> 
> The prototype also touches the migration protocol. Changes in this
> part probably need discussion. The basic idea is that if a block of
> memory (e.g., a 4 KiB page) already exists at the migration
> destination, then the source only sends a checksum of the block
> (currently MD5). The destination uses the checksum to find the
> corresponding block, e.g., by reading it from local storage (instead
> of transferring it over the network). This definitely reduces the
> migration traffic and usually also the overall migration time.
> 
> We currently use MD5 checksums to identify (un)modified blocks. For
> strict ping-pong migration, where a VM only migrates between two
> servers, there is also the possibility to use dirty page tracking to
> identify modified pages. This has not been implemented so far. We are
> also unclear about the potential performance tradeoffs this might
> entail and how it would interact with the dirty page tracking code
> during a live migration.

I like your basic idea, and I kind of agreed with your argument that
if it's good enough for rsync then it's good enough; however, then I
found:
  https://github.com/therealmik/rsync-collision

which complicates the argument!  Those are 700-byte blocks, so I guess
the chance of a collision on a 4 kB page must be less likely; but I'd
want a crypto guy to say what was actually safe-ish.
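
For reference, a digest over a whole 4 KiB page could be computed with
glib along these lines (illustrative only, not the prototype's code):

#include <glib.h>

/* Return a newly allocated hex digest identifying one page; the
 * destination would use this as the lookup key for a local copy.
 */
static gchar *page_digest(const guchar *page, gsize len)
{
    GChecksum *cs = g_checksum_new(G_CHECKSUM_MD5);
    gchar *hex;

    g_checksum_update(cs, page, len);
    hex = g_strdup(g_checksum_get_string(cs));
    g_checksum_free(cs);
    return hex;
}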

> Our research also includes a look at real world data to motivate that
> this optimization actually does make sense in practise. If you are
> interested, you can find a draft of the relevant paper at:
> 
> https://www.dropbox.com/s/v7qzim8exmji6j5/paper.pdf?dl=0
> 
> Keep in mind that the paper is not published yet and, hence, work in progress.
> 
> As you can see, there are many open/unanswered questions, but I'm
> hopeful that this feature will eventually become part of kvm/qemu such
> that everyone can benefit from it.
> 
> Please find the current code at
> https://bitbucket.org/tknauth/vecycle-qemu/branch/checkpoint-assisted-migration

That asks me to log in just to read it, which is a very odd thing
to have. I suggest you post your code to the list, with basically
this message at the start of the series, saying it's very new and you
expect it needs lots of changes; that way people can more easily
look at it.

Dave

> 
> Looking forward to your feedback,
> Thomas.
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v6 27/47] MIGRATION_STATUS_POSTCOPY_ACTIVE: Add new migration state

2015-04-14 Thread Dr. David Alan Gilbert
* Eric Blake (ebl...@redhat.com) wrote:
> On 04/14/2015 11:03 AM, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" 
> > 
> > 'MIGRATION_STATUS_POSTCOPY_ACTIVE' is entered after migrate_start_postcopy
> > 
> > 'migration_postcopy_phase' is provided for other sections to know if
> > they're in postcopy.
> > 
> > Signed-off-by: Dr. David Alan Gilbert 
> > Reviewed-by: David Gibson 
> > ---
> >  include/migration/migration.h |  2 ++
> >  migration/migration.c | 56 
> > ---
> >  qapi-schema.json  |  4 +++-
> >  trace-events  |  1 +
> >  4 files changed, 54 insertions(+), 9 deletions(-)
> > 
> 
> > +++ b/qapi-schema.json
> > @@ -424,6 +424,8 @@
> >  #
> >  # @active: in the process of doing migration.
> >  #
> > +# @postcopy-active: as active, but now in postcopy mode.
> > +#
> 
> s/as/like/
> Needs a (since 2.4) designation.
> 
> Minor enough that I'm okay if you fix them and add:
> Reviewed-by: Eric Blake 

Done.  Thanks.

Dave
> 
> -- 
> Eric Blake   eblake redhat com+1-919-301-3266
> Libvirt virtualization library http://libvirt.org
> 


--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [RFC PATCH] vl.c: add -semihosting-config "arg" sub-argument

2015-04-14 Thread Liviu Ionescu

> On 08 Apr 2015, at 19:20, Leon Alrae  wrote:
> 
> ... I do understand
> however that in your particular case cmdline is more convenient, thus I
> personally don’t mind having both options: the user-friendly cmdline and
> more flexible arg.

I was a bit optimistic with the first implementation of "--semihosting-cmdline 
string", since this approach complicates things when calling the emulator from 
a script.

a normal script might do something like

~~~
# identify own args and use shift to 'eat' them
exec qemu ... --semihosting-cmdline $@
~~~

the first thought was to transform the $@ array into a single string with 
quotes, like "$@", but this does not work properly if the arguments already 
include spaces.

so, after long consideration, I had to revert to the very first idea: use a
variable-number-of-args option, which obviously cannot be placed anywhere else
but at the end.

the new implementation is available from:

  https://sourceforge.net/p/gnuarmeclipse/qemu/ci/gnuarmeclipse-dev/tree/vl.c

the option is identified in the first parsing loop; the args are stored and argc
is reduced accordingly, so the second pass works without any changes.

one more thing to note is the concatenate_semihosting_cmdline() function, which 
is not as simple as Leon suggested in the original post, since it has to add 
quotes to arguments containing spaces.
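
As a hypothetical sketch of that quoting logic (the real
concatenate_semihosting_cmdline() in the tree linked above may differ):

#include <glib.h>
#include <string.h>

/* Join argv[] into one command line, wrapping any argument that
 * contains spaces in double quotes.
 */
static char *join_args_quoted(int argc, char **argv)
{
    GString *cmdline = g_string_new(NULL);
    int i;

    for (i = 0; i < argc; i++) {
        if (i > 0) {
            g_string_append_c(cmdline, ' ');
        }
        if (strchr(argv[i], ' ') != NULL) {
            g_string_append_c(cmdline, '"');
            g_string_append(cmdline, argv[i]);
            g_string_append_c(cmdline, '"');
        } else {
            g_string_append(cmdline, argv[i]);
        }
    }
    /* FALSE: keep the character data and hand it back to the caller */
    return g_string_free(cmdline, FALSE);
}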

I updated the Eclipse plug-in to use this mechanism and there were no problems.
Passing the args from a script is also pretty easy, so to me this looks like
the right solution.


any comments?


Livius



Re: [Qemu-devel] [PATCH v6 27/47] MIGRATION_STATUS_POSTCOPY_ACTIVE: Add new migration state

2015-04-14 Thread Eric Blake
On 04/14/2015 11:03 AM, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" 
> 
> 'MIGRATION_STATUS_POSTCOPY_ACTIVE' is entered after migrate_start_postcopy
> 
> 'migration_postcopy_phase' is provided for other sections to know if
> they're in postcopy.
> 
> Signed-off-by: Dr. David Alan Gilbert 
> Reviewed-by: David Gibson 
> ---
>  include/migration/migration.h |  2 ++
>  migration/migration.c | 56 
> ---
>  qapi-schema.json  |  4 +++-
>  trace-events  |  1 +
>  4 files changed, 54 insertions(+), 9 deletions(-)
> 

> +++ b/qapi-schema.json
> @@ -424,6 +424,8 @@
>  #
>  # @active: in the process of doing migration.
>  #
> +# @postcopy-active: as active, but now in postcopy mode.
> +#

s/as/like/
Needs a (since 2.4) designation.

Minor enough that I'm okay if you fix them and add:
Reviewed-by: Eric Blake 

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



signature.asc
Description: OpenPGP digital signature


Re: [Qemu-devel] [PATCH v6 26/47] migrate_start_postcopy: Command to trigger transition to postcopy

2015-04-14 Thread Dr. David Alan Gilbert
* Eric Blake (ebl...@redhat.com) wrote:
> On 04/14/2015 11:03 AM, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" 
> > 
> > Once postcopy is enabled (with migrate_set_capability), the migration
> > will still start on precopy mode.  To cause a transition into postcopy
> > the:
> > 
> >   migrate_start_postcopy
> > 
> > command must be issued.  Postcopy will start sometime after this
> > (when it's next checked in the migration loop).
> > 
> > Issuing the command before migration has started will error,
> > and issuing after it has finished is ignored.
> > 
> > Signed-off-by: Dr. David Alan Gilbert 
> > Reviewed-by: Eric Blake 
> > ---
> 
> > +++ b/qapi-schema.json
> > @@ -566,6 +566,14 @@
> >  { 'command': 'query-migrate-capabilities', 'returns':   
> > ['MigrationCapabilityStatus']}
> >  
> >  ##
> > +# @migrate-start-postcopy
> > +#
> > +# Switch migration to postcopy mode
> > +#
> > +# Since: 2.3
> 
> 2.4

Thanks, fixed.

Dave

> > +{ 'command': 'migrate-start-postcopy' }
> > +
> 
> As that's easily fixable by the maintainer, my R-b stands.
> 
> -- 
> Eric Blake   eblake redhat com+1-919-301-3266
> Libvirt virtualization library http://libvirt.org
> 


--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH v6 26/47] migrate_start_postcopy: Command to trigger transition to postcopy

2015-04-14 Thread Eric Blake
On 04/14/2015 11:03 AM, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" 
> 
> Once postcopy is enabled (with migrate_set_capability), the migration
> will still start on precopy mode.  To cause a transition into postcopy
> the:
> 
>   migrate_start_postcopy
> 
> command must be issued.  Postcopy will start sometime after this
> (when it's next checked in the migration loop).
> 
> Issuing the command before migration has started will error,
> and issuing after it has finished is ignored.
> 
> Signed-off-by: Dr. David Alan Gilbert 
> Reviewed-by: Eric Blake 
> ---

> +++ b/qapi-schema.json
> @@ -566,6 +566,14 @@
>  { 'command': 'query-migrate-capabilities', 'returns':   
> ['MigrationCapabilityStatus']}
>  
>  ##
> +# @migrate-start-postcopy
> +#
> +# Switch migration to postcopy mode
> +#
> +# Since: 2.3

2.4

> +{ 'command': 'migrate-start-postcopy' }
> +

As that's easily fixable by the maintainer, my R-b stands.

-- 
Eric Blake   eblake redhat com+1-919-301-3266
Libvirt virtualization library http://libvirt.org



signature.asc
Description: OpenPGP digital signature


[Qemu-devel] Cross-Compiling Static QEMU for Windows

2015-04-14 Thread Liviu Ionescu
> [1] http://gnuarmeclipse.livius.net/wiki/How_to_build_QEMU

this page is quite old; there is a newer one in my wiki. In the latest
Windows builds I no longer use the --static option, but... I don't remember
exactly the reason for this...

regards,

Liviu




[Qemu-devel] [Bug 1438144] Re: Page sizes are not interpreted correctly for E500/E500MC

2015-04-14 Thread WGH
You're absolutely right. Sorry for bothering.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1438144

Title:
  Page sizes are not interpreted correctly for E500/E500MC

Status in QEMU:
  Invalid

Bug description:
  http://cache.freescale.com/files/32bit/doc/ref_manual/E500CORERM.pdf - see 
2.12.5.2 MAS Register 1 (MAS1), p. 2-41
  http://cache.freescale.com/files/32bit/doc/ref_manual/E500MCRM.pdf - see 
2.16.6.2 MAS Register 1 (MAS1), p. 2-54

  According to these documents, variable page size for TLB1 is computed
  as 4K ** TSIZE.

  However, QEMU always treats it as if it was 1K << TSIZE, even if
  options like "-cpu e500mc" are supplied to qemu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1438144/+subscriptions



[Qemu-devel] [Bug 1444081] [NEW] x86_64 heavy crash on PPC 64 host

2015-04-14 Thread luigiburdo
Public bug reported:

this appened to me with last 2.3.0 rc 2
qemu-system-x86-64 crash  , with only 2047 or 1024 -m option and -hda set

qemu: fatal: Trying to execute code outside RAM or ROM at
0x00181f9a000a

EAX= EBX= ECX= EDX=0663
ESI= EDI= EBP= ESP=
EIP=0009fff3 EFL=0046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =   9300
CS =f000   9b00
SS =   9300
DS =   9300
FS =   9300
GS =   9300
LDT=   8200
TR =   8b00
GDT=  
IDT=  
CR0=6010 CR2= CR3= CR4=
DR0= DR1= DR2= 
DR3=
DR6=0ff0 DR7=0400
CCS= CCD= CCO=ADDB
EFER=
FCW=037f FSW= [ST=0] FTW=00 MXCSR=1f80
FPR0=  FPR1= 
FPR2=  FPR3= 
FPR4=  FPR5= 
FPR6=  FPR7= 
XMM00= XMM01=
XMM02= XMM03=
XMM04= XMM05=
XMM06= XMM07=
Annullato (core dump creato)

Keep a good work

My machine host
G5 Quad , radeon hd 6570 2gb , 8gb ram ...
host OS Lubuntu 14.04.2

** Affects: qemu
 Importance: Undecided
 Status: New


** Tags: crash host ppc x86

** Description changed:

  this appened to me with last 2.3.0 rc 2
  
  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x00181f9a000a
  
  EAX= EBX= ECX= EDX=0663
  ESI= EDI= EBP= ESP=
  EIP=0009fff3 EFL=0046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
  ES =   9300
  CS =f000   9b00
  SS =   9300
  DS =   9300
  FS =   9300
  GS =   9300
  LDT=   8200
  TR =   8b00
  GDT=  
  IDT=  
  CR0=6010 CR2= CR3= CR4=
- DR0= DR1= DR2= 
DR3= 
+ DR0= DR1= DR2= 
DR3=
  DR6=0ff0 DR7=0400
- CCS= CCD= CCO=ADDB
+ CCS= CCD= CCO=ADDB
  EFER=
  FCW=037f FSW= [ST=0] FTW=00 MXCSR=1f80
  FPR0=  FPR1= 
  FPR2=  FPR3= 
  FPR4=  FPR5= 
  FPR6=  FPR7= 
  XMM00= XMM01=
  XMM02= XMM03=
  XMM04= XMM05=
  XMM06= XMM07=
  Annullato (core dump creato)
  
+ Keep a good work
  
- Keep a good work
+ My machine host 
+ G5 Quad , radeon hd 6570 2gb , 8gb ram ...
+ host OS Lubuntu 14.04.2

** Description changed:

  this appened to me with last 2.3.0 rc 2
+ qemu-system-x86-64 crash  , with only 2047 or 1024 -m option and -hda set
  
  qemu: fatal: Trying to execute code outside RAM or ROM at
  0x00181f9a000a
  
  EAX= EBX= ECX= EDX=0663
  ESI= EDI= EBP= ESP=
  EIP=0009fff3 EFL=0046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
  ES =   9300
  CS =f000   9b00
  SS =   9300
  DS =   9300
  FS =   9300
  GS =   9300
  LDT=   8200
  TR =   8b00
  GDT=  
  IDT=  
  CR0=6010 CR2= CR3= CR4=
  DR0= DR1= DR2= 
DR3=
  DR6=0ff0 DR7=0400
  CCS= CCD= CCO=ADDB
  EFER=
  FCW=037f FSW= [ST=0] FTW=00 MXCSR=1f80
  FPR0=  FPR1= 
  FPR2=  FPR3= 
  FPR4=  FPR5= 
  FPR6=  FPR7= 
  XMM00= XMM01=
  XMM02= XM

[Qemu-devel] [PATCH v6 47/47] Inhibit ballooning during postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Postcopy detects accesses to pages that haven't been transferred yet
using userfaultfd, and it causes exceptions on pages that are 'not
present'.
Ballooning also causes pages to be marked as 'not present' when the
guest inflates the balloon.
Potentially a balloon could be inflated to discard pages that are
currently inflight during postcopy and that may be arriving at about
the same time.

To avoid this confusion, disable ballooning during postcopy.

When disabled we drop balloon requests from the guest.  Since ballooning
is generally initiated by the host, the management system should avoid
initiating any balloon instructions to the guest during migration,
although it's not possible to know how long it would take a guest to
process a request made prior to the start of migration.

Queueing the requests until after migration would be nice, but is
non-trivial, since the set of inflate/deflate requests have to
be compared with the state of the page to know what the final
outcome is allowed to be.

Signed-off-by: Dr. David Alan Gilbert 
---
 balloon.c  | 11 +++
 hw/virtio/virtio-balloon.c |  4 +++-
 include/sysemu/balloon.h   |  2 ++
 migration/postcopy-ram.c   |  9 +
 4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/balloon.c b/balloon.c
index 70c00f5..0274df8 100644
--- a/balloon.c
+++ b/balloon.c
@@ -35,6 +35,17 @@
 static QEMUBalloonEvent *balloon_event_fn;
 static QEMUBalloonStatus *balloon_stat_fn;
 static void *balloon_opaque;
+static bool balloon_inhibited;
+
+bool qemu_balloon_is_inhibited(void)
+{
+return balloon_inhibited;
+}
+
+void qemu_balloon_inhibit(bool state)
+{
+balloon_inhibited = state;
+}
 
 static bool have_balloon(Error **errp)
 {
diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
index 95b0643..8bb93db 100644
--- a/hw/virtio/virtio-balloon.c
+++ b/hw/virtio/virtio-balloon.c
@@ -37,9 +37,11 @@
 static void balloon_page(void *addr, int deflate)
 {
 #if defined(__linux__)
-if (!kvm_enabled() || kvm_has_sync_mmu())
+if (!qemu_balloon_is_inhibited() && (!kvm_enabled() ||
+ kvm_has_sync_mmu())) {
 qemu_madvise(addr, TARGET_PAGE_SIZE,
 deflate ? QEMU_MADV_WILLNEED : QEMU_MADV_DONTNEED);
+}
 #endif
 }
 
diff --git a/include/sysemu/balloon.h b/include/sysemu/balloon.h
index 0345e01..6851d99 100644
--- a/include/sysemu/balloon.h
+++ b/include/sysemu/balloon.h
@@ -23,5 +23,7 @@ typedef void (QEMUBalloonStatus)(void *opaque, BalloonInfo 
*info);
 int qemu_add_balloon_handler(QEMUBalloonEvent *event_func,
 QEMUBalloonStatus *stat_func, void *opaque);
 void qemu_remove_balloon_handler(void *opaque);
+bool qemu_balloon_is_inhibited(void);
+void qemu_balloon_inhibit(bool state);
 
 #endif
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index ddf0841..50ce6eb 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -24,6 +24,7 @@
 #include "migration/migration.h"
 #include "migration/postcopy-ram.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/balloon.h"
 #include "qemu/error-report.h"
 #include "trace.h"
 
@@ -316,6 +317,8 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState 
*mis)
 mis->have_fault_thread = false;
 }
 
+qemu_balloon_inhibit(false);
+
 if (enable_mlock) {
 if (os_mlock() < 0) {
 error_report("mlock: %s", strerror(errno));
@@ -514,6 +517,12 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
 return -1;
 }
 
+/*
+ * Ballooning can mark pages as absent while we're postcopying
+ * that would cause false userfaults.
+ */
+qemu_balloon_inhibit(true);
+
 trace_postcopy_ram_enable_notify();
 
 return 0;
-- 
2.1.0




[Qemu-devel] [PATCH v6 45/47] End of migration for postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Tweak the end of migration cleanup; we don't want to close stuff down
at the end of the main stream, since the postcopy is still sending pages
on the other thread.

Signed-off-by: Dr. David Alan Gilbert 
---
 migration/migration.c | 25 -
 trace-events  |  2 ++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 6537d23..180c8b0 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -160,12 +160,35 @@ static void process_incoming_migration_co(void *opaque)
 {
 QEMUFile *f = opaque;
 Error *local_err = NULL;
+MigrationIncomingState *mis;
+PostcopyState ps;
 int ret;
 
-migration_incoming_state_new(f);
+mis = migration_incoming_state_new(f);
 
 ret = qemu_loadvm_state(f);
 
+ps = postcopy_state_get(mis);
+trace_process_incoming_migration_co_end(ret, ps);
+if (ps != POSTCOPY_INCOMING_NONE) {
+if (ps == POSTCOPY_INCOMING_ADVISE) {
+/*
+ * Where a migration had postcopy enabled (and thus went to advise)
+ * but managed to complete within the precopy period, we can use
+ * the normal exit.
+ */
+postcopy_ram_incoming_cleanup(mis);
+} else if (ret >= 0) {
+/*
+ * Postcopy was started, cleanup should happen at the end of the
+ * postcopy thread.
+ */
+trace_process_incoming_migration_co_postcopy_end_main();
+return;
+}
+/* Else if something went wrong then just fall out of the normal exit 
*/
+}
+
 qemu_fclose(f);
 free_xbzrle_decoded_buf();
 migration_incoming_state_destroy();
diff --git a/trace-events b/trace-events
index 1ab9079..1378992 100644
--- a/trace-events
+++ b/trace-events
@@ -1435,6 +1435,8 @@ source_return_path_thread_loop_top(void) ""
 source_return_path_thread_pong(uint32_t val) "%x"
 source_return_path_thread_shut(uint32_t val) "%x"
 migrate_transferred(uint64_t tranferred, uint64_t time_spent, double 
bandwidth, uint64_t size) "transferred %" PRIu64 " time_spent %" PRIu64 " 
bandwidth %g max_size %" PRId64
+process_incoming_migration_co_end(int ret, int ps) "ret=%d postcopy-state=%d"
+process_incoming_migration_co_postcopy_end_main(void) ""
 
 # migration/rdma.c
 qemu_dma_accept_incoming_migration(void) ""
-- 
2.1.0




[Qemu-devel] [PATCH v6 44/47] postcopy: Wire up loadvm_postcopy_handle_ commands

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Wire up more of the handlers for the commands on the destination side,
in particular loadvm_postcopy_handle_run now has enough to start the
guest running.

Signed-off-by: Dr. David Alan Gilbert 
---
 savevm.c | 29 -
 trace-events |  2 ++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/savevm.c b/savevm.c
index ce8c3b5..a1fabb5 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1360,12 +1360,34 @@ static int 
loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
 static int loadvm_postcopy_handle_run(MigrationIncomingState *mis)
 {
 PostcopyState ps = postcopy_state_set(mis, POSTCOPY_INCOMING_RUNNING);
+Error *local_err = NULL;
+
 trace_loadvm_postcopy_handle_run();
 if (ps != POSTCOPY_INCOMING_LISTENING) {
 error_report("CMD_POSTCOPY_RUN in wrong postcopy state (%d)", ps);
 return -1;
 }
 
+/* TODO we should move all of this lot into postcopy_ram.c or a shared code
+ * in migration.c
+ */
+cpu_synchronize_all_post_init();
+
+qemu_announce_self();
+
+/* Make sure all file formats flush their mutable metadata */
+bdrv_invalidate_cache_all(&local_err);
+if (local_err) {
+qerror_report_err(local_err);
+error_free(local_err);
+return -1;
+}
+
+trace_loadvm_postcopy_handle_run_cpu_sync();
+cpu_synchronize_all_post_init();
+
+trace_loadvm_postcopy_handle_run_vmstart();
+
 if (autostart) {
 /* Hold onto your hats, starting the CPU */
 vm_start();
@@ -1374,7 +1396,12 @@ static int 
loadvm_postcopy_handle_run(MigrationIncomingState *mis)
 runstate_set(RUN_STATE_PAUSED);
 }
 
-return 0;
+/* We need to finish reading the stream from the package
+ * and also stop reading anything more from the stream that loaded the
+ * package (since it's now being read by the listener thread).
+ * LOADVM_QUIT will quit all the layers of nested loadvm loops.
+ */
+return LOADVM_QUIT;
 }
 
 static int loadvm_process_command_simple_lencheck(const char *name,
diff --git a/trace-events b/trace-events
index 2f50cc4..1ab9079 100644
--- a/trace-events
+++ b/trace-events
@@ -1182,6 +1182,8 @@ loadvm_handle_cmd_packaged_received(int ret) "%d"
 loadvm_postcopy_handle_advise(void) ""
 loadvm_postcopy_handle_listen(void) ""
 loadvm_postcopy_handle_run(void) ""
+loadvm_postcopy_handle_run_cpu_sync(void) ""
+loadvm_postcopy_handle_run_vmstart(void) ""
 loadvm_postcopy_ram_handle_discard(void) ""
 loadvm_postcopy_ram_handle_discard_end(void) ""
 loadvm_postcopy_ram_handle_discard_header(const char *ramid, uint16_t len) 
"%s: %ud"
-- 
2.1.0




[Qemu-devel] [PATCH v6 42/47] Postcopy; Handle userfault requests

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

userfaultfd is a Linux syscall that gives an fd that receives a stream
of notifications of accesses to pages registered with it and allows
the program to acknowledge those stalls and tell the accessing
thread to carry on.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |   4 +
 migration/postcopy-ram.c  | 165 +++---
 trace-events  |   9 +++
 3 files changed, 169 insertions(+), 9 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index db06fd2..4d6f33a 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -75,11 +75,15 @@ struct MigrationIncomingState {
  */
 QemuEvent  main_thread_load_event;
 
+bool   have_fault_thread;
 QemuThread fault_thread;
 QemuSemaphore  fault_thread_sem;
 
 /* For the kernel to send us notifications */
 intuserfault_fd;
+/* To tell the fault_thread to quit */
+intuserfault_quit_fd;
+
 QEMUFile *return_path;
 QemuMutex  rp_mutex;/* We send replies from multiple threads */
 void  *postcopy_tmp_page;
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 33aadbc..b2dc3b7 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -49,6 +49,8 @@ struct PostcopyDiscardState {
  */
 #if defined(__linux__)
 
+#include 
+#include 
 #include 
 #include 
 #include 
@@ -273,15 +275,41 @@ int postcopy_ram_incoming_init(MigrationIncomingState 
*mis, size_t ram_pages)
  */
 int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis)
 {
-/* TODO: Join the fault thread once we're sure it will exit */
-if (qemu_ram_foreach_block(cleanup_area, mis)) {
-return -1;
+trace_postcopy_ram_incoming_cleanup_entry();
+
+if (mis->have_fault_thread) {
+uint64_t tmp64;
+
+if (qemu_ram_foreach_block(cleanup_area, mis)) {
+return -1;
+}
+/*
+ * Tell the fault_thread to exit, it's an eventfd that should
+ * currently be at 0, we're going to inc it to 1
+ */
+tmp64 = 1;
+if (write(mis->userfault_quit_fd, &tmp64, 8) == 8) {
+trace_postcopy_ram_incoming_cleanup_join();
+qemu_thread_join(&mis->fault_thread);
+} else {
+/* Not much we can do here, but may as well report it */
+error_report("%s: incing userfault_quit_fd: %s", __func__,
+ strerror(errno));
+}
+trace_postcopy_ram_incoming_cleanup_closeuf();
+close(mis->userfault_fd);
+close(mis->userfault_quit_fd);
+mis->have_fault_thread = false;
 }
 
+postcopy_state_set(mis, POSTCOPY_INCOMING_END);
+migrate_send_rp_shut(mis, qemu_file_get_error(mis->file) != 0);
+
 if (mis->postcopy_tmp_page) {
 munmap(mis->postcopy_tmp_page, getpagesize());
 mis->postcopy_tmp_page = NULL;
 }
+trace_postcopy_ram_incoming_cleanup_exit();
 return 0;
 }
 
@@ -320,31 +348,150 @@ static int ram_block_enable_notify(const char 
*block_name, void *host_addr,
 static void *postcopy_ram_fault_thread(void *opaque)
 {
 MigrationIncomingState *mis = (MigrationIncomingState *)opaque;
-
-fprintf(stderr, "postcopy_ram_fault_thread\n");
-/* TODO: In later patch */
+uint64_t hostaddr; /* The kernel always gives us 64 bit, not a pointer */
+int ret;
+size_t hostpagesize = getpagesize();
+RAMBlock *rb = NULL;
+RAMBlock *last_rb = NULL; /* last RAMBlock we sent part of */
+uint8_t *local_tmp_page;
+
+trace_postcopy_ram_fault_thread_entry();
 qemu_sem_post(&mis->fault_thread_sem);
-while (1) {
-/* TODO: In later patch */
+
+local_tmp_page = mmap(NULL, getpagesize(),
+  PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS,
+  -1, 0);
+if (!local_tmp_page) {
+error_report("%s mapping local tmp page: %s", __func__,
+ strerror(errno));
+return NULL;
 }
+if (madvise(local_tmp_page, getpagesize(), MADV_DONTFORK)) {
+munmap(local_tmp_page, getpagesize());
+error_report("%s postcopy local page DONTFORK: %s", __func__,
+ strerror(errno));
+return NULL;
+}
+
+while (true) {
+ram_addr_t rb_offset;
+ram_addr_t in_raspace;
+struct pollfd pfd[2];
+
+/*
+ * We're mainly waiting for the kernel to give us a faulting HVA,
+ * however we can be told to quit via userfault_quit_fd which is
+ * an eventfd
+ */
+pfd[0].fd = mis->userfault_fd;
+pfd[0].events = POLLIN;
+pfd[0].revents = 0;
+pfd[1].fd = mis->userfault_quit_fd;
+pfd[1].events = POLLIN; /* Waiting for eventfd to go positive */
+pfd[1].revents = 0;
+
+if (poll(pfd, 2, -1

[Qemu-devel] [PATCH v6 41/47] Host page!=target page: Cleanup bitmaps

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Prior to the start of postcopy, ensure that everything that will
be transferred later is a whole host-page in size.

This is accomplished by discarding partially transferred host pages
and marking any that are partially dirty as fully dirty.

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c | 271 +++-
 1 file changed, 269 insertions(+), 2 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index dc672bf..18253af 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -850,7 +850,6 @@ static int ram_find_and_save_block(QEMUFile *f, bool 
last_stage,
 int pages = 0;
 ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
  ram_addr_t space */
-unsigned long hps = sysconf(_SC_PAGESIZE);
 
 if (!block) {
 block = QLIST_FIRST_RCU(&ram_list.blocks);
@@ -867,7 +866,8 @@ static int ram_find_and_save_block(QEMUFile *f, bool 
last_stage,
  *   b) The last sent item was the last target-page in a host page
  */
 if (last_was_from_queue || !last_sent_block ||
-((last_offset & (hps - 1)) == (hps - TARGET_PAGE_SIZE))) {
+((last_offset & ~qemu_host_page_mask) ==
+ (qemu_host_page_size - TARGET_PAGE_SIZE))) {
 tmpblock = ram_save_unqueue_page(ms, &tmpoffset, &dirty_ram_abs);
 }
 
@@ -1152,6 +1152,265 @@ static int 
postcopy_each_ram_send_discard(MigrationState *ms)
 }
 
 /*
+ * Helper for postcopy_chunk_hostpages where HPS/TPS >= bits-in-long
+ *
+ * !! Untested !!
+ */
+static int hostpage_big_chunk_helper(const char *block_name, void *host_addr,
+ ram_addr_t offset, ram_addr_t length,
+ void *opaque)
+{
+MigrationState *ms = opaque;
+unsigned long long_bits = sizeof(long) * 8;
+unsigned int host_len = (qemu_host_page_size / TARGET_PAGE_SIZE) /
+long_bits;
+unsigned long first_long, last_long, cur_long, current_hp;
+unsigned long first = offset >> TARGET_PAGE_BITS;
+unsigned long last = (offset + (length - 1)) >> TARGET_PAGE_BITS;
+
+PostcopyDiscardState *pds = postcopy_discard_send_init(ms,
+   first,
+   block_name);
+first_long = first / long_bits;
+last_long = last / long_bits;
+
+/*
+ * I'm assuming RAMBlocks must start at the start of host pages,
+ * but I guess they might not use the whole of the host page
+ */
+
+/* Work along one host page at a time */
+for (current_hp = first_long; current_hp <= last_long;
+ current_hp += host_len) {
+bool discard = 0;
+bool redirty = 0;
+bool has_some_dirty = false;
+bool has_some_undirty = false;
+bool has_some_sent = false;
+bool has_some_unsent = false;
+
+/*
+ * Check each long of mask for this hp, and see if anything
+ * needs updating.
+ */
+for (cur_long = current_hp; cur_long < (current_hp + host_len);
+ cur_long++) {
+/* a chunk of sent pages */
+unsigned long sdata = ms->sentmap[cur_long];
+/* a chunk of dirty pages */
+unsigned long ddata = migration_bitmap[cur_long];
+
+if (sdata) {
+has_some_sent = true;
+}
+if (sdata != ~0ul) {
+has_some_unsent = true;
+}
+if (ddata) {
+has_some_dirty = true;
+}
+if (ddata != ~0ul) {
+has_some_undirty = true;
+}
+
+}
+
+if (has_some_sent && has_some_unsent) {
+/* Partially sent host page */
+discard = true;
+redirty = true;
+}
+
+if (has_some_dirty && has_some_undirty) {
+/* Partially dirty host page */
+redirty = true;
+}
+
+if (!discard && !redirty) {
+/* All consistent - next host page */
+continue;
+}
+
+
+/* Now walk the chunks again, sending discards etc */
+for (cur_long = current_hp; cur_long < (current_hp + host_len);
+ cur_long++) {
+unsigned long cur_bits = cur_long * long_bits;
+
+/* a chunk of sent pages */
+unsigned long sdata = ms->sentmap[cur_long];
+/* a chunk of dirty pages */
+unsigned long ddata = migration_bitmap[cur_long];
+
+if (discard && sdata) {
+/* Tell the destination to discard these pages */
+postcopy_discard_send_range(ms, pds, cur_bits,
+cur_bits + long_bits - 1);
+/* And clear them in the sent data structure */
+ms->sentmap[cur_long] = 0;
+  

[Qemu-devel] [PATCH v6 40/47] Don't sync dirty bitmaps in postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Once we're in postcopy the source processors are stopped and memory
shouldn't change any more, so there's no need to look at the dirty
map.

There are two notes to this:
  1) If we do resync and a page had changed then the page would get
 sent again, which the destination wouldn't allow (since it might
 have also modified the page)
  2) Before disabling this I'd seen very rare cases where a page had been
 marked dirtied although the memory contents are apparently identical

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 arch_init.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 0d3e865..dc672bf 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1391,7 +1391,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 {
 rcu_read_lock();
 
-migration_bitmap_sync();
+if (!migration_postcopy_phase(migrate_get_current())) {
+migration_bitmap_sync();
+}
 
 ram_control_before_iterate(f, RAM_CONTROL_FINISH);
 
@@ -1425,7 +1427,8 @@ static void ram_save_pending(QEMUFile *f, void *opaque, 
uint64_t max_size,
 
 remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
 
-if (remaining_size < max_size) {
+if (!migration_postcopy_phase(migrate_get_current()) &&
+remaining_size < max_size) {
 qemu_mutex_lock_iothread();
 rcu_read_lock();
 migration_bitmap_sync();
-- 
2.1.0




[Qemu-devel] [PATCH v6 43/47] Start up a postcopy/listener thread ready for incoming page data

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The loading of a device state (during postcopy) may access guest
memory that's still on the source machine and thus might need
a page fill; split off a separate thread that handles the incoming
page data so that the original incoming migration code can finish
off the device data.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |  4 +++
 migration/migration.c |  6 
 savevm.c  | 79 ++-
 trace-events  |  2 ++
 4 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 4d6f33a..cce4c50 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -79,6 +79,10 @@ struct MigrationIncomingState {
 QemuThread fault_thread;
 QemuSemaphore  fault_thread_sem;
 
+bool   have_listen_thread;
+QemuThread listen_thread;
+QemuSemaphore  listen_thread_sem;
+
 /* For the kernel to send us notifications */
 intuserfault_fd;
 /* To tell the fault_thread to quit */
diff --git a/migration/migration.c b/migration/migration.c
index 2509798..6537d23 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1082,6 +1082,12 @@ static int postcopy_start(MigrationState *ms, bool 
*old_vm_running)
 goto fail;
 }
 
+/*
+ * Make sure the receiver can get incoming pages before we send the rest
+ * of the state
+ */
+qemu_savevm_send_postcopy_listen(fb);
+
 qemu_savevm_state_complete_precopy(fb);
 qemu_savevm_send_ping(fb, 3);
 
diff --git a/savevm.c b/savevm.c
index f606ce8..ce8c3b5 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1261,6 +1261,65 @@ static int 
loadvm_postcopy_ram_handle_discard(MigrationIncomingState *mis,
 return 0;
 }
 
+/*
+ * Triggered by a postcopy_listen command; this thread takes over reading
+ * the input stream, leaving the main thread free to carry on loading the rest
+ * of the device state (from RAM).
+ * (TODO:This could do with being in a postcopy file - but there again it's
+ * just another input loop, not that postcopy specific)
+ */
+static void *postcopy_ram_listen_thread(void *opaque)
+{
+QEMUFile *f = opaque;
+MigrationIncomingState *mis = migration_incoming_get_current();
+int load_res;
+
+qemu_sem_post(&mis->listen_thread_sem);
+trace_postcopy_ram_listen_thread_start();
+
+/*
+ * Because we're a thread and not a coroutine we can't yield
+ * in qemu_file, and thus we must be blocking now.
+ */
+qemu_file_change_blocking(f, true);
+load_res = qemu_loadvm_state_main(f, mis);
+/* And non-blocking again so we don't block in any cleanup */
+qemu_file_change_blocking(f, false);
+
+trace_postcopy_ram_listen_thread_exit();
+if (load_res < 0) {
+error_report("%s: loadvm failed: %d", __func__, load_res);
+qemu_file_set_error(f, load_res);
+} else {
+/*
+ * This looks good, but it's possible that the device loading in the
+ * main thread hasn't finished yet, and so we might not be in 'RUN'
+ * state yet; wait for the end of the main thread.
+ */
+qemu_event_wait(&mis->main_thread_load_event);
+}
+postcopy_ram_incoming_cleanup(mis);
+/*
+ * If everything has worked fine, then the main thread has waited
+ * for us to start, and we're the last use of the mis.
+ * (If something broke then qemu will have to exit anyway since it's
+ * got a bad migration state).
+ */
+migration_incoming_state_destroy();
+
+if (load_res < 0) {
+/*
+ * If something went wrong then we have a bad state so exit;
+ * depending how far we got it might be possible at this point
+ * to leave the guest running and fire MCEs for pages that never
+ * arrived as a desperate recovery step.
+ */
+exit(EXIT_FAILURE);
+}
+
+return NULL;
+}
+
 /* After this message we must be able to immediately receive postcopy data */
 static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
 {
@@ -1280,7 +1339,20 @@ static int 
loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
 return -1;
 }
 
-/* TODO start up the postcopy listening thread */
+if (mis->have_listen_thread) {
+error_report("CMD_POSTCOPY_RAM_LISTEN already has a listen thread");
+return -1;
+}
+
+mis->have_listen_thread = true;
+/* Start up the listening thread and wait for it to signal ready */
+qemu_sem_init(&mis->listen_thread_sem, 0);
+qemu_thread_create(&mis->listen_thread, "postcopy/listen",
+   postcopy_ram_listen_thread, mis->file,
+   QEMU_THREAD_JOINABLE);
+qemu_sem_wait(&mis->listen_thread_sem);
+qemu_sem_destroy(&mis->listen_thread_sem);
+
 return 0;
 }
 
@@ -1597,6 +1669,11 @@ int qemu_loadvm_st

[Qemu-devel] [PATCH v6 38/47] Postcopy: Use helpers to map pages during migration

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

In postcopy, the destination guest is running at the same time
as it's receiving pages; as we receive new pages we must put
them into the guests address space atomically to avoid a running
CPU accessing a partially written page.

Use the helpers in postcopy-ram.c to map these pages.

qemu_get_buffer_less_copy is used to avoid a copy out of qemu_file
in the case that postcopy is going to do a copy anyway.

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c | 117 +---
 1 file changed, 97 insertions(+), 20 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index c96c4c1..0d3e865 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1476,7 +1476,17 @@ static int load_xbzrle(QEMUFile *f, ram_addr_t addr, 
void *host)
 /* Must be called from within a rcu critical section.
  * Returns a pointer from within the RCU-protected ram_list.
  */
+/*
+ * Read a RAMBlock ID from the stream f, find the host address of the
+ * start of that block and add on 'offset'
+ *
+ * f: Stream to read from
+ * mis: MigrationIncomingState
+ * offset: Offset within the block
+ * flags: Page flags (mostly to see if it's a continuation of previous block)
+ */
 static inline void *host_from_stream_offset(QEMUFile *f,
+MigrationIncomingState *mis,
 ram_addr_t offset,
 int flags)
 {
@@ -1489,7 +1499,6 @@ static inline void *host_from_stream_offset(QEMUFile *f,
 error_report("Ack, bad migration stream!");
 return NULL;
 }
-
 return memory_region_get_ram_ptr(block->mr) + offset;
 }
 
@@ -1534,6 +1543,16 @@ static int ram_load(QEMUFile *f, void *opaque, int 
version_id)
 {
 int flags = 0, ret = 0;
 static uint64_t seq_iter;
+/*
+ * System is running in postcopy mode, page inserts to host memory must be
+ * atomic
+ */
+MigrationIncomingState *mis = migration_incoming_get_current();
+bool postcopy_running = postcopy_state_get(mis) >=
+POSTCOPY_INCOMING_LISTENING;
+void *postcopy_host_page = NULL;
+bool postcopy_place_needed = false;
+bool matching_page_sizes = qemu_host_page_size == TARGET_PAGE_SIZE;
 
 seq_iter++;
 
@@ -1549,13 +1568,57 @@ static int ram_load(QEMUFile *f, void *opaque, int 
version_id)
 rcu_read_lock();
 while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
 ram_addr_t addr, total_ram_bytes;
-void *host;
+void *host = 0;
+void *page_buffer = 0;
+void *postcopy_place_source = 0;
 uint8_t ch;
+bool all_zero = false;
 
 addr = qemu_get_be64(f);
 flags = addr & ~TARGET_PAGE_MASK;
 addr &= TARGET_PAGE_MASK;
 
+if (flags & (RAM_SAVE_FLAG_COMPRESS | RAM_SAVE_FLAG_PAGE |
+ RAM_SAVE_FLAG_XBZRLE)) {
+host = host_from_stream_offset(f, mis, addr, flags);
+if (!host) {
+error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
+ret = -EINVAL;
+break;
+}
+if (!postcopy_running) {
+page_buffer = host;
+} else {
+/*
+ * Postcopy requires that we place whole host pages atomically.
+ * To make it atomic, the data is read into a temporary page
+ * that's moved into place later.
+ * The migration protocol uses,  possibly smaller, target-pages
+ * however the source ensures it always sends all the 
components
+ * of a host page in order.
+ */
+if (!postcopy_host_page) {
+postcopy_host_page = postcopy_get_tmp_page(mis);
+}
+page_buffer = postcopy_host_page +
+  ((uintptr_t)host & ~qemu_host_page_mask);
+/* If all TP are zero then we can optimise the place */
+if (!((uintptr_t)host & ~qemu_host_page_mask)) {
+all_zero = true;
+}
+
+/*
+ * If it's the last part of a host page then we place the host
+ * page
+ */
+postcopy_place_needed = (((uintptr_t)host + TARGET_PAGE_SIZE) &
+ ~qemu_host_page_mask) == 0;
+postcopy_place_source = postcopy_host_page;
+}
+} else {
+postcopy_place_needed = false;
+}
+
 switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
 case RAM_SAVE_FLAG_MEM_SIZE:
 /* Synchronize RAM block list */
@@ -1592,30 +1655,36 @@ static int ram_load(QEMUFile *f, void *opaque, int 
version_id)
 }
 break;
 case RAM_SAVE_FLAG_COMPRESS:
-host = host_from_stream_offs

[Qemu-devel] [PATCH v6 46/47] Disable mlock around incoming postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Userfault doesn't work with mlock; mlock is designed to nail down pages
so they don't move, userfault is designed to tell you when they're not
there.

munlock the pages we userfault protect before postcopy.
mlock everything again at the end if mlock is enabled.

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 include/sysemu/sysemu.h  |  1 +
 migration/postcopy-ram.c | 24 
 2 files changed, 25 insertions(+)

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 248f0d6..d6fca99 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -171,6 +171,7 @@ extern int boot_menu;
 extern bool boot_strict;
 extern uint8_t *boot_splash_filedata;
 extern size_t boot_splash_filedata_size;
+extern bool enable_mlock;
 extern uint8_t qemu_extra_params_fw[2];
 extern QEMUClockType rtc_clock;
 extern const char *mem_path;
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index b2dc3b7..ddf0841 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -84,6 +84,11 @@ static bool ufd_version_check(int ufd)
 return true;
 }
 
+/*
+ * Note: This has the side effect of munlock'ing all of RAM, that's
+ * normally fine since if the postcopy succeeds it gets turned back on at the
+ * end.
+ */
 bool postcopy_ram_supported_by_host(void)
 {
 long pagesize = getpagesize();
@@ -112,6 +117,15 @@ bool postcopy_ram_supported_by_host(void)
 }
 
 /*
+ * userfault and mlock don't go together; we'll put it back later if
+ * it was enabled.
+ */
+if (munlockall()) {
+error_report("%s: munlockall: %s", __func__,  strerror(errno));
+return -1;
+}
+
+/*
  *  We need to check that the ops we need are supported on anon memory
  *  To do that we need to register a chunk and see the flags that
  *  are returned.
@@ -302,6 +316,16 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState 
*mis)
 mis->have_fault_thread = false;
 }
 
+if (enable_mlock) {
+if (os_mlock() < 0) {
+error_report("mlock: %s", strerror(errno));
+/*
+ * It doesn't feel right to fail at this point, we have a valid
+ * VM state.
+ */
+}
+}
+
 postcopy_state_set(mis, POSTCOPY_INCOMING_END);
 migrate_send_rp_shut(mis, qemu_file_get_error(mis->file) != 0);
 
-- 
2.1.0




[Qemu-devel] [PATCH v6 34/47] Page request: Add MIG_RP_MSG_REQ_PAGES reverse command

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Add MIG_RP_MSG_REQ_PAGES command on Return path for the postcopy
destination to request a page from the source.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |  4 +++
 migration/migration.c | 70 +++
 trace-events  |  1 +
 3 files changed, 75 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index c02266e..37bd54a 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -46,6 +46,8 @@ enum mig_rp_message_type {
 MIG_RP_MSG_INVALID = 0,  /* Must be 0 */
 MIG_RP_MSG_SHUT, /* sibling will not send any more RP messages */
 MIG_RP_MSG_PONG, /* Response to a PING; data (seq: be32 ) */
+
+MIG_RP_MSG_REQ_PAGES,/* data (start: be64, len: be64) */
 };
 
 typedef QLIST_HEAD(, LoadStateEntry) LoadStateEntry_Head;
@@ -236,6 +238,8 @@ void migrate_send_rp_shut(MigrationIncomingState *mis,
   uint32_t value);
 void migrate_send_rp_pong(MigrationIncomingState *mis,
   uint32_t value);
+void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
+  ram_addr_t start, ram_addr_t len);
 
 void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
diff --git a/migration/migration.c b/migration/migration.c
index cf26d0d..41f377c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -99,6 +99,36 @@ static void deferred_incoming_migration(Error **errp)
 deferred_incoming = true;
 }
 
+/* Request a range of pages from the source VM at the given
+ * start address.
+ *   rbname: Name of the RAMBlock to request the page in, if NULL it's the same
+ *   as the last request (a name must have been given previously)
+ *   Start: Address offset within the RB
+ *   Len: Length in bytes required - must be a multiple of pagesize
+ */
+void migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
+   ram_addr_t start, ram_addr_t len)
+{
+uint8_t bufc[16+1+255]; /* start (8 bytes), len (8 bytes), rbname up to 256 */
+uint64_t *buf64 = (uint64_t *)bufc;
+size_t msglen = 16; /* start + len */
+
+assert(!(len & 1));
+if (rbname) {
+int rbname_len = strlen(rbname);
+assert(rbname_len < 256);
+
+len |= 1; /* Flag to say we've got a name */
+bufc[msglen++] = rbname_len;
+memcpy(bufc + msglen, rbname, rbname_len);
+msglen += rbname_len;
+}
+
+buf64[0] = cpu_to_be64((uint64_t)start);
+buf64[1] = cpu_to_be64((uint64_t)len);
+migrate_send_rp_message(mis, MIG_RP_MSG_REQ_PAGES, msglen, bufc);
+}
+
 void qemu_start_incoming_migration(const char *uri, Error **errp)
 {
 const char *p;
@@ -804,6 +834,17 @@ static void source_return_path_bad(MigrationState *s)
 }
 
 /*
+ * Process a request for pages received on the return path,
+ * We're allowed to send more than requested (e.g. to round to our page size)
+ * and we don't need to send pages that have already been sent.
+ */
+static void migrate_handle_rp_req_pages(MigrationState *ms, const char* rbname,
+   ram_addr_t start, ram_addr_t len)
+{
+trace_migrate_handle_rp_req_pages(rbname, start, len);
+}
+
+/*
  * Handles messages sent on the return path towards the source VM
  *
  */
@@ -815,6 +856,8 @@ static void *source_return_path_thread(void *opaque)
 const int max_len = 512;
 uint8_t buf[max_len];
 uint32_t tmp32;
+ram_addr_t start, len;
+char *tmpstr;
 int res;
 
 trace_source_return_path_thread_entry();
@@ -830,6 +873,11 @@ static void *source_return_path_thread(void *opaque)
 expected_len = 4;
 break;
 
+case MIG_RP_MSG_REQ_PAGES:
+/* 16 byte start/len _possibly_ plus an id str */
+expected_len = 16 + 256;
+break;
+
 default:
 error_report("RP: Received invalid message 0x%04x length 0x%04x",
 header_type, header_len);
@@ -875,6 +923,28 @@ static void *source_return_path_thread(void *opaque)
 trace_source_return_path_thread_pong(tmp32);
 break;
 
+case MIG_RP_MSG_REQ_PAGES:
+start = be64_to_cpup((uint64_t *)buf);
+len = be64_to_cpup(((uint64_t *)buf)+1);
+tmpstr = NULL;
+if (len & 1) {
+len -= 1; /* Remove the flag */
+/* Now we expect an idstr */
+tmp32 = buf[16]; /* Length of the following idstr */
+tmpstr = (char *)&buf[17];
+buf[17+tmp32] = '\0';
+expected_len = 16+1+tmp32;
+} else {
+expected_len = 16;
+}
+if (header_len != expected_len) {
+error_report("R

[Qemu-devel] [PATCH v6 37/47] postcopy_ram.c: place_page and helpers

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

postcopy_place_page (etc) provide a way for postcopy to place a page
into guest memory atomically (using the copy ioctl on the ufd).

Signed-off-by: Dr. David Alan Gilbert 
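
For context, a rough usage sketch (not part of this patch; the real wiring lands
with the listen/fault threads later in the series) of how the two helpers are
meant to be combined on the destination: the payload is read into the reusable
temporary page and only then installed atomically at its final address.

#include <unistd.h>
#include "qemu-common.h"
#include "migration/migration.h"
#include "migration/postcopy-ram.h"

static int place_one_incoming_page(MigrationIncomingState *mis,
                                   QEMUFile *f, void *host_addr)
{
    int pagesize = getpagesize();
    void *tmp = postcopy_get_tmp_page(mis);   /* same mapping reused each call */

    if (!tmp) {
        return -1;
    }

    /* Read the data somewhere that is *not* the final destination; the final
     * mapping must only ever appear fully populated, in one atomic step. */
    if (qemu_get_buffer(f, tmp, pagesize) != pagesize) {
        return -1;
    }

    /* The UFFDIO_COPY behind this call installs the page and wakes any vCPU
     * thread that faulted on it; all_zero=false since we have real data. */
    return postcopy_place_page(mis, host_addr, tmp, false);
}
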
---
 include/migration/migration.h|  1 +
 include/migration/postcopy-ram.h | 16 
 migration/postcopy-ram.c | 87 
 trace-events |  1 +
 4 files changed, 105 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 75c3299..db06fd2 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -82,6 +82,7 @@ struct MigrationIncomingState {
 intuserfault_fd;
 QEMUFile *return_path;
 QemuMutex  rp_mutex;/* We send replies from multiple threads */
+void  *postcopy_tmp_page;
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
diff --git a/include/migration/postcopy-ram.h b/include/migration/postcopy-ram.h
index 88793b3..5d8ec61 100644
--- a/include/migration/postcopy-ram.h
+++ b/include/migration/postcopy-ram.h
@@ -69,4 +69,20 @@ void postcopy_discard_send_range(MigrationState *ms, 
PostcopyDiscardState *pds,
 void postcopy_discard_send_finish(MigrationState *ms,
   PostcopyDiscardState *pds);
 
+/*
+ * Place a page (from) at (host) efficiently
+ *There are restrictions on how 'from' must be mapped, in general best
+ *to use other postcopy_ routines to allocate.
+ * returns 0 on success
+ */
+int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
+bool all_zero);
+
+/*
+ * Allocate a page of memory that can be mapped at a later point in time
+ * using postcopy_place_page
+ * Returns: Pointer to allocated page
+ */
+void *postcopy_get_tmp_page(MigrationIncomingState *mis);
+
 #endif
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 1be3bc9..33aadbc 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -278,6 +278,10 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState 
*mis)
 return -1;
 }
 
+if (mis->postcopy_tmp_page) {
+munmap(mis->postcopy_tmp_page, getpagesize());
+mis->postcopy_tmp_page = NULL;
+}
 return 0;
 }
 
@@ -344,6 +348,77 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
 return 0;
 }
 
+/*
+ * Place a host page (from) at (host) atomically
+ * all_zero: Hint that the page being placed is 0 throughout
+ * returns 0 on success
+ */
+int postcopy_place_page(MigrationIncomingState *mis, void *host, void *from,
+bool all_zero)
+{
+if (!all_zero) {
+struct uffdio_copy copy_struct;
+
+copy_struct.dst = (uint64_t)(uintptr_t)host;
+copy_struct.src = (uint64_t)(uintptr_t)from;
+copy_struct.len = getpagesize();
+copy_struct.mode = 0;
+
+/* copy also acks to the kernel waking the stalled thread up
+ * TODO: We can inhibit that ack and only do it if it was requested
+ * which would be slightly cheaper, but we'd have to be careful
+ * of the order of updating our page state.
+ */
+if (ioctl(mis->userfault_fd, UFFDIO_COPY, &copy_struct)) {
+int e = errno;
+error_report("%s: %s copy host: %p from: %p",
+ __func__, strerror(e), host, from);
+
+return -e;
+}
+} else {
+struct uffdio_zeropage zero_struct;
+
+zero_struct.range.start = (uint64_t)(uintptr_t)host;
+zero_struct.range.len = getpagesize();
+zero_struct.mode = 0;
+
+if (ioctl(mis->userfault_fd, UFFDIO_ZEROPAGE, &zero_struct)) {
+int e = errno;
+error_report("%s: %s zero host: %p from: %p",
+ __func__, strerror(e), host, from);
+
+return -e;
+}
+}
+
+trace_postcopy_place_page(host, all_zero);
+return 0;
+}
+
+/*
+ * Returns a target page of memory that can be mapped at a later point in time
+ * using postcopy_place_page
+ * The same address is used repeatedly, postcopy_place_page just takes the
+ * backing page away.
+ * Returns: Pointer to allocated page
+ *
+ */
+void *postcopy_get_tmp_page(MigrationIncomingState *mis)
+{
+if (!mis->postcopy_tmp_page) {
+mis->postcopy_tmp_page = mmap(NULL, getpagesize(),
+ PROT_READ | PROT_WRITE, MAP_PRIVATE |
+ MAP_ANONYMOUS, -1, 0);
+if (!mis->postcopy_tmp_page) {
+error_report("%s: %s", __func__, strerror(errno));
+return NULL;
+}
+}
+
+return mis->postcopy_tmp_page;
+}
+
 #else
 /* No target OS support, stubs just fail */
 bool postcopy_ram_supported_by_host(void)
@@ -373,6 +448,18 @@ int postcopy_ram_enable_notify(MigrationIncomingState *mis)
 {
 assert(0);
 }
+
+int postcopy_place_page(MigrationIncomingState *mis, void

[Qemu-devel] [PATCH v6 35/47] Page request: Process incoming page request

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

On receiving MIG_RP_MSG_REQ_PAGES, look up the address and
queue the page.

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c   | 64 ++-
 include/exec/cpu-all.h|  2 --
 include/migration/migration.h | 21 ++
 include/qemu/typedefs.h   |  1 +
 migration/migration.c | 31 +
 trace-events  |  1 +
 6 files changed, 117 insertions(+), 3 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 2c937d1..48403f3 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -715,7 +715,69 @@ static int ram_save_page(QEMUFile *f, RAMBlock* block, 
ram_addr_t offset,
 return pages;
 }
 
-/**
+/*
+ * Queue the pages for transmission, e.g. a request from postcopy destination
+ *   ms: MigrationStatus in which the queue is held
+ *   rbname: The RAMBlock the request is for - may be NULL (to mean reuse last)
+ *   start: Offset from the start of the RAMBlock
+ *   len: Length (in bytes) to send
+ *   Return: 0 on success
+ */
+int ram_save_queue_pages(MigrationState *ms, const char *rbname,
+ ram_addr_t start, ram_addr_t len)
+{
+RAMBlock *ramblock;
+
+rcu_read_lock();
+if (!rbname) {
+/* Reuse last RAMBlock */
+ramblock = ms->last_req_rb;
+
+if (!ramblock) {
+/*
+ * Shouldn't happen, we can't reuse the last RAMBlock if
+ * it's the 1st request.
+ */
+error_report("ram_save_queue_pages no previous block");
+goto err;
+}
+} else {
+ramblock = ram_find_block(rbname);
+
+if (!ramblock) {
+/* We shouldn't be asked for a non-existent RAMBlock */
+error_report("ram_save_queue_pages no block '%s'", rbname);
+goto err;
+}
+}
+trace_ram_save_queue_pages(ramblock->idstr, start, len);
+if (start+len > ramblock->used_length) {
+error_report("%s request overrun start=%zx len=%zx blocklen=%zx",
+ __func__, start, len, ramblock->used_length);
+goto err;
+}
+
+struct MigrationSrcPageRequest *new_entry =
+g_malloc0(sizeof(struct MigrationSrcPageRequest));
+new_entry->rb = ramblock;
+new_entry->offset = start;
+new_entry->len = len;
+ms->last_req_rb = ramblock;
+
+qemu_mutex_lock(&ms->src_page_req_mutex);
+memory_region_ref(ramblock->mr);
+QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
+qemu_mutex_unlock(&ms->src_page_req_mutex);
+rcu_read_unlock();
+
+return 0;
+
+err:
+rcu_read_unlock();
+return -1;
+}
+
+/*
  * ram_find_and_save_block: Finds a dirty page and sends it to f
  *
  * Called within an RCU critical section.
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index ac06c67..1f336e6 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -266,8 +266,6 @@ CPUArchState *cpu_copy(CPUArchState *env);
 
 /* memory API */
 
-typedef struct RAMBlock RAMBlock;
-
 struct RAMBlock {
 struct rcu_head rcu;
 struct MemoryRegion *mr;
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 37bd54a..75c3299 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -88,6 +88,18 @@ MigrationIncomingState *migration_incoming_get_current(void);
 MigrationIncomingState *migration_incoming_state_new(QEMUFile *f);
 void migration_incoming_state_destroy(void);
 
+/*
+ * An outstanding page request, on the source, having been received
+ * and queued
+ */
+struct MigrationSrcPageRequest {
+RAMBlock *rb;
+hwaddroffset;
+hwaddrlen;
+
+QSIMPLEQ_ENTRY(MigrationSrcPageRequest) next_req;
+};
+
 struct MigrationState
 {
 int64_t bandwidth_limit;
@@ -130,6 +142,12 @@ struct MigrationState
  * of the postcopy phase
  */
 unsigned long *sentmap;
+
+/* Queue of outstanding page requests from the destination */
+QemuMutex src_page_req_mutex;
+QSIMPLEQ_HEAD(src_page_requests, MigrationSrcPageRequest) 
src_page_requests;
+/* The RAMBlock used in the last src_page_request */
+RAMBlock *last_req_rb;
 };
 
 void process_incoming_migration(QEMUFile *f);
@@ -259,6 +277,9 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t 
block_offset,
  ram_addr_t offset, size_t size,
  uint64_t *bytes_sent);
 
+int ram_save_queue_pages(MigrationState *ms, const char *rbname,
+ ram_addr_t start, ram_addr_t len);
+
 PostcopyState postcopy_state_get(MigrationIncomingState *mis);
 
 /* Set the state and return the old state */
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index 5f130fe..61b5b46 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -72,6 +72,7 @@ typedef struct QEMUSGList QEMUSGList;
 typedef struct QEMUSizedBuffer QEMUSizedBuffer;
 typedef struc

[Qemu-devel] [PATCH v6 31/47] postcopy: ram_enable_notify to switch on userfault

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Mark the area of RAM as 'userfault'
Start up a fault-thread to handle any userfaults we might receive
from it (to be filled in later)

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
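
The fault handler itself is only stubbed here; for orientation, a heavily
simplified sketch (my own, not code from this series) of what a userfaultfd read
loop typically looks like, using the uffd_msg layout from linux/userfaultfd.h.
request_page() stands in for the translation and return-path request that later
patches add.

#include <stdint.h>
#include <poll.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static void request_page(void *host_addr);   /* placeholder for later patches */

static void fault_loop(int userfault_fd)
{
    for (;;) {
        struct pollfd pfd = { .fd = userfault_fd, .events = POLLIN };
        struct uffd_msg msg;

        if (poll(&pfd, 1, -1 /* no timeout */) <= 0) {
            break;                       /* error, or fd closed at cleanup */
        }
        if (read(userfault_fd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg)) {
            break;
        }
        if (msg.event != UFFD_EVENT_PAGEFAULT) {
            continue;                    /* only MISSING faults are registered */
        }
        /* The faulting host virtual address; the real thread maps this back
         * to a RAMBlock/offset and asks the source for that page. */
        request_page((void *)(uintptr_t)msg.arg.pagefault.address);
    }
}
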
---
 include/migration/migration.h|  3 ++
 include/migration/postcopy-ram.h |  6 
 migration/postcopy-ram.c | 69 +++-
 savevm.c |  9 ++
 4 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 8c8afc4..36451de 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -73,6 +73,9 @@ struct MigrationIncomingState {
  */
 QemuEvent  main_thread_load_event;
 
+QemuThread fault_thread;
+QemuSemaphore  fault_thread_sem;
+
 /* For the kernel to send us notifications */
 intuserfault_fd;
 QEMUFile *return_path;
diff --git a/include/migration/postcopy-ram.h b/include/migration/postcopy-ram.h
index b46af08..88793b3 100644
--- a/include/migration/postcopy-ram.h
+++ b/include/migration/postcopy-ram.h
@@ -17,6 +17,12 @@
 bool postcopy_ram_supported_by_host(void);
 
 /*
+ * Make all of RAM sensitive to accesses to areas that haven't yet been written
+ * and wire up anything necessary to deal with it.
+ */
+int postcopy_ram_enable_notify(MigrationIncomingState *mis);
+
+/*
  * Initialise postcopy-ram, setting the RAM to a state where we can go into
  * postcopy later; must be called prior to any precopy.
  * called from arch_init's similarly named ram_postcopy_incoming_init
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 16b78c2..1be3bc9 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -281,9 +281,71 @@ int postcopy_ram_incoming_cleanup(MigrationIncomingState 
*mis)
 return 0;
 }
 
+/*
+ * Mark the given area of RAM as requiring notification to unwritten areas
+ * Used as a  callback on qemu_ram_foreach_block.
+ *   host_addr: Base of area to mark
+ *   offset: Offset in the whole ram arena
+ *   length: Length of the section
+ *   opaque: MigrationIncomingState pointer
+ * Returns 0 on success
+ */
+static int ram_block_enable_notify(const char *block_name, void *host_addr,
+   ram_addr_t offset, ram_addr_t length,
+   void *opaque)
+{
+MigrationIncomingState *mis = opaque;
+struct uffdio_register reg_struct;
+
+reg_struct.range.start = (uintptr_t)host_addr;
+reg_struct.range.len = length;
+reg_struct.mode = UFFDIO_REGISTER_MODE_MISSING;
+
+/* Now tell our userfault_fd that it's responsible for this area */
+if (ioctl(mis->userfault_fd, UFFDIO_REGISTER, &reg_struct)) {
+error_report("%s userfault register: %s", __func__, strerror(errno));
+return -1;
+}
+
+return 0;
+}
+
+/*
+ * Handle faults detected by the USERFAULT markings
+ */
+static void *postcopy_ram_fault_thread(void *opaque)
+{
+MigrationIncomingState *mis = (MigrationIncomingState *)opaque;
+
+fprintf(stderr, "postcopy_ram_fault_thread\n");
+/* TODO: In later patch */
+qemu_sem_post(&mis->fault_thread_sem);
+while (1) {
+/* TODO: In later patch */
+}
+
+return NULL;
+}
+
+int postcopy_ram_enable_notify(MigrationIncomingState *mis)
+{
+/* Create the fault handler thread and wait for it to be ready */
+qemu_sem_init(&mis->fault_thread_sem, 0);
+qemu_thread_create(&mis->fault_thread, "postcopy/fault",
+   postcopy_ram_fault_thread, mis, QEMU_THREAD_JOINABLE);
+qemu_sem_wait(&mis->fault_thread_sem);
+qemu_sem_destroy(&mis->fault_thread_sem);
+
+/* Mark so that we get notified of accesses to unwritten areas */
+if (qemu_ram_foreach_block(ram_block_enable_notify, mis)) {
+return -1;
+}
+
+return 0;
+}
+
 #else
 /* No target OS support, stubs just fail */
-
 bool postcopy_ram_supported_by_host(void)
 {
 error_report("%s: No OS support", __func__);
@@ -306,6 +368,11 @@ int postcopy_ram_discard_range(MigrationIncomingState 
*mis, uint8_t *start,
 {
 assert(0);
 }
+
+int postcopy_ram_enable_notify(MigrationIncomingState *mis)
+{
+assert(0);
+}
 #endif
 
 /* - */
diff --git a/savevm.c b/savevm.c
index c383ce0..f606ce8 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1271,6 +1271,15 @@ static int 
loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
 return -1;
 }
 
+/*
+ * Sensitise RAM - can now generate requests for blocks that don't exist
+ * However, at this point the CPU shouldn't be running, and the IO
+ * shouldn't be doing anything yet so don't actually expect requests
+ */
+if (postcopy_ram_enable_notify(mis)) {
+return -1;
+}
+
 /* TODO start up the postcopy listening thread */
 return 0;
 }
-- 
2.1.0




[Qemu-devel] [PATCH v6 29/47] Postcopy: Maintain sentmap and calculate discard

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Where postcopy is preceded by a period of precopy, the destination will
have received pages that may have been dirtied on the source after the
page was sent.  The destination must throw these pages away before
starting its CPUs.

Maintain a 'sentmap' of pages that have already been sent.
Calculate list of sent & dirty pages
Provide helpers on the destination side to discard these.

Signed-off-by: Dr. David Alan Gilbert 
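
As a minimal illustration of the calculation described above (not the patch's own
code, which walks per-RAMBlock and batches ranges into discard commands): a page
needs discarding on the destination exactly when its bit is set in both the
sentmap and the dirty bitmap.

#include <stddef.h>

/* 'sentmap' and 'dirty' are bitmaps with one bit per target page. */
static void find_pages_to_discard(const unsigned long *sentmap,
                                  const unsigned long *dirty,
                                  size_t nr_words,
                                  void (*discard)(size_t page_index))
{
    for (size_t w = 0; w < nr_words; w++) {
        unsigned long need = sentmap[w] & dirty[w];   /* sent AND re-dirtied */

        while (need) {
            int bit = __builtin_ctzl(need);
            discard(w * sizeof(unsigned long) * 8 + bit);
            need &= need - 1;            /* clear the lowest set bit */
        }
    }
}
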
---
 arch_init.c  | 202 ++-
 include/migration/migration.h|  12 +++
 include/migration/postcopy-ram.h |  35 +++
 include/qemu/typedefs.h  |   1 +
 migration/migration.c|   1 +
 migration/postcopy-ram.c | 108 +
 savevm.c |   2 -
 trace-events |   5 +
 8 files changed, 362 insertions(+), 4 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 0a49ace..efc2938 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -40,6 +40,7 @@
 #include "hw/audio/audio.h"
 #include "sysemu/kvm.h"
 #include "migration/migration.h"
+#include "migration/postcopy-ram.h"
 #include "hw/i386/smbios.h"
 #include "exec/address-spaces.h"
 #include "hw/audio/pcspk.h"
@@ -443,9 +444,17 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t 
**current_data,
 return 1;
 }
 
+/* mr: The region to search for dirty pages in
+ * start: Start address (typically so we can continue from previous page)
+ * ram_addr_abs: Pointer into which to store the address of the dirty page
+ *   within the global ram_addr space
+ *
+ * Returns: byte offset within memory region of the start of a dirty page
+ */
 static inline
 ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
- ram_addr_t start)
+ ram_addr_t start,
+ ram_addr_t *ram_addr_abs)
 {
 unsigned long base = mr->ram_addr >> TARGET_PAGE_BITS;
 unsigned long nr = base + (start >> TARGET_PAGE_BITS);
@@ -464,6 +473,7 @@ ram_addr_t 
migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
 clear_bit(next, migration_bitmap);
 migration_dirty_pages--;
 }
+*ram_addr_abs = next << TARGET_PAGE_BITS;
 return (next - base) << TARGET_PAGE_BITS;
 }
 
@@ -603,6 +613,19 @@ static void migration_bitmap_sync(void)
 }
 }
 
+static RAMBlock *ram_find_block(const char *id)
+{
+RAMBlock *block;
+
+QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+if (!strcmp(id, block->idstr)) {
+return block;
+}
+}
+
+return NULL;
+}
+
 /**
  * ram_save_page: Send the given page to the stream
  *
@@ -713,13 +736,16 @@ static int ram_find_and_save_block(QEMUFile *f, bool 
last_stage,
 bool complete_round = false;
 int pages = 0;
 MemoryRegion *mr;
+ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
+ ram_addr_t space */
 
 if (!block)
 block = QLIST_FIRST_RCU(&ram_list.blocks);
 
 while (true) {
 mr = block->mr;
-offset = migration_bitmap_find_and_reset_dirty(mr, offset);
+offset = migration_bitmap_find_and_reset_dirty(mr, offset,
+   &dirty_ram_abs);
 if (complete_round && block == last_seen_block &&
 offset >= last_offset) {
 break;
@@ -738,6 +764,11 @@ static int ram_find_and_save_block(QEMUFile *f, bool 
last_stage,
 
 /* if page is unmodified, continue to the next */
 if (pages > 0) {
+MigrationState *ms = migrate_get_current();
+if (ms->sentmap) {
+set_bit(dirty_ram_abs >> TARGET_PAGE_BITS, ms->sentmap);
+}
+
 last_sent_block = block;
 break;
 }
@@ -799,12 +830,19 @@ void free_xbzrle_decoded_buf(void)
 
 static void migration_end(void)
 {
+MigrationState *s = migrate_get_current();
+
 if (migration_bitmap) {
 memory_global_dirty_log_stop();
 g_free(migration_bitmap);
 migration_bitmap = NULL;
 }
 
+if (s->sentmap) {
+g_free(s->sentmap);
+s->sentmap = NULL;
+}
+
 XBZRLE_cache_lock();
 if (XBZRLE.cache) {
 cache_fini(XBZRLE.cache);
@@ -878,6 +916,160 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool 
expected)
 }
 }
 
+/*  functions for postcopy * */
+
+/*
+ * Callback from postcopy_each_ram_send_discard for each RAMBlock
+ * start,end: Indexes into the bitmap for the first and last bit
+ *representing the named block
+ */
+static int postcopy_send_discard_bm_ram(MigrationState *ms,
+PostcopyDiscardState *pds,
+unsigned long start, unsigned long end)
+{
+unsigned long cur

[Qemu-devel] [PATCH v6 39/47] qemu_ram_block_from_host

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Postcopy sends RAMBlock names and offsets over the wire (since it can't
rely on the order of ramaddr being the same), and it starts out with
HVA fault addresses from the kernel.

qemu_ram_block_from_host translates a HVA into a RAMBlock, an offset
in the RAMBlock and the global ram_addr_t value.

Rewrite qemu_ram_addr_from_host to use qemu_ram_block_from_host.

Provide qemu_ram_get_idstr since it's the actual name text sent on the
wire.

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
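
A usage sketch (mine, not from this patch) of how a later patch in the series is
expected to combine these pieces: turn the faulting host address handed back by
the kernel into the (RAMBlock name, offset) pair that goes over the return path.

#include <unistd.h>
#include "exec/cpu-common.h"
#include "migration/migration.h"

static int fault_addr_to_request(MigrationIncomingState *mis, void *fault_addr)
{
    ram_addr_t ram_addr, offset;
    RAMBlock *rb;

    /* round_offset=true so the request starts on a target-page boundary */
    rb = qemu_ram_block_from_host(fault_addr, true, &ram_addr, &offset);
    if (!rb) {
        return -1;                 /* fault was outside guest RAM */
    }

    /* qemu_ram_get_idstr() is the name text actually sent on the wire */
    migrate_send_rp_req_pages(mis, qemu_ram_get_idstr(rb), offset,
                              getpagesize());
    return 0;
}
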
---
 exec.c| 54 +++
 include/exec/cpu-common.h |  3 +++
 2 files changed, 48 insertions(+), 9 deletions(-)

diff --git a/exec.c b/exec.c
index c3027cf..86f2b87 100644
--- a/exec.c
+++ b/exec.c
@@ -1280,6 +1280,11 @@ static RAMBlock *find_ram_block(ram_addr_t addr)
 return NULL;
 }
 
+const char *qemu_ram_get_idstr(RAMBlock *rb)
+{
+return rb->idstr;
+}
+
 /* Called with iothread lock held.  */
 void qemu_ram_set_idstr(ram_addr_t addr, const char *name, DeviceState *dev)
 {
@@ -1768,8 +1773,16 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, hwaddr 
*size)
 }
 }
 
-/* Some of the softmmu routines need to translate from a host pointer
- * (typically a TLB entry) back to a ram offset.
+/*
+ * Translates a host ptr back to a RAMBlock, a ram_addr and an offset
+ * in that RAMBlock.
+ *
+ * ptr: Host pointer to look up
+ * round_offset: If true round the result offset down to a page boundary
+ * *ram_addr: set to result ram_addr
+ * *offset: set to result offset within the RAMBlock
+ *
+ * Returns: RAMBlock (or NULL if not found)
  *
  * By the time this function returns, the returned pointer is not protected
  * by RCU anymore.  If the caller is not within an RCU critical section and
@@ -1777,18 +1790,22 @@ static void *qemu_ram_ptr_length(ram_addr_t addr, 
hwaddr *size)
  * pointer, such as a reference to the region that includes the incoming
  * ram_addr_t.
  */
-MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
+RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
+   ram_addr_t *ram_addr,
+   ram_addr_t *offset)
 {
 RAMBlock *block;
 uint8_t *host = ptr;
-MemoryRegion *mr;
 
 if (xen_enabled()) {
 rcu_read_lock();
 *ram_addr = xen_ram_addr_from_mapcache(ptr);
-mr = qemu_get_ram_block(*ram_addr)->mr;
+block = qemu_get_ram_block(*ram_addr);
+if (block) {
+*offset = (host - block->host);
+}
 rcu_read_unlock();
-return mr;
+return block;
 }
 
 rcu_read_lock();
@@ -1811,10 +1828,29 @@ MemoryRegion *qemu_ram_addr_from_host(void *ptr, 
ram_addr_t *ram_addr)
 return NULL;
 
 found:
-*ram_addr = block->offset + (host - block->host);
-mr = block->mr;
+*offset = (host - block->host);
+if (round_offset) {
+*offset &= TARGET_PAGE_MASK;
+}
+*ram_addr = block->offset + *offset;
 rcu_read_unlock();
-return mr;
+return block;
+}
+
+/* Some of the softmmu routines need to translate from a host pointer
+   (typically a TLB entry) back to a ram offset.  */
+MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
+{
+RAMBlock *block;
+ram_addr_t offset; /* Not used */
+
+block = qemu_ram_block_from_host(ptr, false, ram_addr, &offset);
+
+if (!block) {
+return NULL;
+}
+
+return block->mr;
 }
 
 static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 2abecac..13f8d3a 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -62,8 +62,11 @@ typedef uint32_t CPUReadMemoryFunc(void *opaque, hwaddr 
addr);
 void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
 /* This should not be used by devices.  */
 MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr);
+RAMBlock *qemu_ram_block_from_host(void *ptr, bool round_offset,
+   ram_addr_t *ram_addr, ram_addr_t *offset);
 void qemu_ram_set_idstr(ram_addr_t addr, const char *name, DeviceState *dev);
 void qemu_ram_unset_idstr(ram_addr_t addr);
+const char *qemu_ram_get_idstr(RAMBlock *rb);
 
 void cpu_physical_memory_rw(hwaddr addr, uint8_t *buf,
 int len, int is_write);
-- 
2.1.0




[Qemu-devel] [PATCH v6 36/47] Page request: Consume pages off the post-copy queue

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

When transmitting RAM pages, consume pages that have been queued by
MIG_RP_MSG_REQ_PAGES commands and send them ahead of normal page scanning.

Note:
  a) After a queued page the linear walk carries on from after the
unqueued page; there is a reasonable chance that the destination
was about to ask for other nearby pages anyway.

  b) We have to be careful of any assumptions that the page walking
code makes; in particular, it takes some shortcuts on its first linear
walk that break as soon as we send a queued page.

Signed-off-by: Dr. David Alan Gilbert 
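
Points (a) and (b) boil down to the small guard below, a standalone restatement
of the check the patch adds (with illustrative names): queued requests are only
serviced once the previous send has finished a whole host page, so a host page
is never assembled from two interleaved walks.

#include <stdbool.h>
#include <unistd.h>

#define TARGET_PAGE_SIZE 4096          /* assumption for this sketch */

static bool may_service_queue(unsigned long last_offset,
                              bool last_was_from_queue,
                              bool nothing_sent_yet)
{
    unsigned long hps = sysconf(_SC_PAGESIZE);   /* host page size */

    /* OK to unqueue if the previous page also came from the queue, if we
     * haven't sent anything yet, or if the last page sent was the final
     * target page within its host page. */
    return last_was_from_queue || nothing_sent_yet ||
           ((last_offset & (hps - 1)) == (hps - TARGET_PAGE_SIZE));
}
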
---
 arch_init.c  | 156 +--
 trace-events |   2 +
 2 files changed, 132 insertions(+), 26 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 48403f3..c96c4c1 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -312,6 +312,7 @@ static RAMBlock *last_seen_block;
 /* This is the last block from where we have sent data */
 static RAMBlock *last_sent_block;
 static ram_addr_t last_offset;
+static bool last_was_from_queue;
 static unsigned long *migration_bitmap;
 static uint64_t migration_dirty_pages;
 static uint32_t last_version;
@@ -490,6 +491,19 @@ static inline bool migration_bitmap_set_dirty(ram_addr_t 
addr)
 return ret;
 }
 
+static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
+{
+bool ret;
+int nr = addr >> TARGET_PAGE_BITS;
+
+ret = test_and_clear_bit(nr, migration_bitmap);
+
+if (ret) {
+migration_dirty_pages--;
+}
+return ret;
+}
+
 static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 {
 ram_addr_t addr;
@@ -716,6 +730,40 @@ static int ram_save_page(QEMUFile *f, RAMBlock* block, 
ram_addr_t offset,
 }
 
 /*
+ * Unqueue a page from the queue fed by postcopy page requests
+ *
+ * Returns:  The RAMBlock* to transmit from (or NULL if the queue is empty)
+ *  ms:  MigrationState in
+ *  offset:  the byte offset within the RAMBlock for the start of the page
+ * ram_addr_abs: global offset in the dirty/sent bitmaps
+ */
+static RAMBlock *ram_save_unqueue_page(MigrationState *ms, ram_addr_t *offset,
+   ram_addr_t *ram_addr_abs)
+{
+RAMBlock *result = NULL;
+qemu_mutex_lock(&ms->src_page_req_mutex);
+if (!QSIMPLEQ_EMPTY(&ms->src_page_requests)) {
+struct MigrationSrcPageRequest *entry =
+QSIMPLEQ_FIRST(&ms->src_page_requests);
+result = entry->rb;
+*offset = entry->offset;
+*ram_addr_abs = (entry->offset + entry->rb->offset) & TARGET_PAGE_MASK;
+
+if (entry->len > TARGET_PAGE_SIZE) {
+entry->len -= TARGET_PAGE_SIZE;
+entry->offset += TARGET_PAGE_SIZE;
+} else {
+memory_region_unref(result->mr);
+QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
+g_free(entry);
+}
+}
+qemu_mutex_unlock(&ms->src_page_req_mutex);
+
+return result;
+}
+
+/*
  * Queue the pages for transmission, e.g. a request from postcopy destination
  *   ms: MigrationStatus in which the queue is held
  *   rbname: The RAMBlock the request is for - may be NULL (to mean reuse last)
@@ -793,47 +841,102 @@ err:
 static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
uint64_t *bytes_transferred)
 {
+MigrationState *ms = migrate_get_current();
 RAMBlock *block = last_seen_block;
+RAMBlock *tmpblock;
 ram_addr_t offset = last_offset;
+ram_addr_t tmpoffset;
 bool complete_round = false;
 int pages = 0;
-MemoryRegion *mr;
 ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
  ram_addr_t space */
+unsigned long hps = sysconf(_SC_PAGESIZE);
 
-if (!block)
+if (!block) {
 block = QLIST_FIRST_RCU(&ram_list.blocks);
+last_was_from_queue = false;
+}
 
-while (true) {
-mr = block->mr;
-offset = migration_bitmap_find_and_reset_dirty(mr, offset,
-   &dirty_ram_abs);
-if (complete_round && block == last_seen_block &&
-offset >= last_offset) {
-break;
+while (true) { /* Until we send a block or run out of stuff to send */
+tmpblock = NULL;
+
+/*
+ * Don't break host-page chunks up with queue items
+ * so only unqueue if,
+ *   a) The last item came from the queue anyway
+ *   b) The last sent item was the last target-page in a host page
+ */
+if (last_was_from_queue || !last_sent_block ||
+((last_offset & (hps - 1)) == (hps - TARGET_PAGE_SIZE))) {
+tmpblock = ram_save_unqueue_page(ms, &tmpoffset, &dirty_ram_abs);
 }
-if (offset >= block->used_length) {
-offset = 0;
-block = QLIST_NEXT_RCU(block, next);
-if (!block) 

[Qemu-devel] [PATCH v6 33/47] Postcopy end in migration_thread

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The end of migration in postcopy is a bit different since some of
the things normally done at the end of migration have already been
done on the transition to postcopy.

The end-of-migration code is getting a bit complicated now, so
move it out into its own function.

Signed-off-by: Dr. David Alan Gilbert 
---
 migration/migration.c | 91 +--
 trace-events  |  6 
 2 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 611aca8..cf26d0d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -906,7 +906,6 @@ static int open_return_path_on_source(MigrationState *ms)
 return 0;
 }
 
-__attribute__ (( unused )) /* Until later in patch series */
 /* Returns 0 if the RP was ok, otherwise there was an error on the RP */
 static int await_return_path_close_on_source(MigrationState *ms)
 {
@@ -1024,6 +1023,68 @@ fail:
 }
 
 /*
+ * Used by migration_thread when there's not much left pending.
+ * The caller 'breaks' the loop when this returns.
+ */
+static void migration_thread_end_of_iteration(MigrationState *s,
+  int current_active_state,
+  bool *old_vm_running,
+  int64_t *start_time)
+{
+int ret;
+if (s->state == MIGRATION_STATUS_ACTIVE) {
+qemu_mutex_lock_iothread();
+*start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
+*old_vm_running = runstate_is_running();
+
+ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+if (ret >= 0) {
+qemu_file_set_rate_limit(s->file, INT64_MAX);
+qemu_savevm_state_complete_precopy(s->file);
+}
+qemu_mutex_unlock_iothread();
+
+if (ret < 0) {
+goto fail;
+}
+} else if (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
+trace_migration_thread_end_of_iteration_postcopy_end();
+
+qemu_savevm_state_complete_postcopy(s->file);
+trace_migration_thread_end_of_iteration_postcopy_end_after_complete();
+}
+
+/*
+ * If rp was opened we must clean up the thread before
+ * cleaning everything else up (since if there are no failures
+ * it will wait for the destination to send its status in
+ * a SHUT command).
+ * Postcopy opens rp if enabled (even if it's not activated)
+ */
+if (migrate_postcopy_ram()) {
+int rp_error;
+trace_migration_thread_end_of_iteration_postcopy_end_before_rp();
+rp_error = await_return_path_close_on_source(s);
+
trace_migration_thread_end_of_iteration_postcopy_end_after_rp(rp_error);
+if (rp_error) {
+goto fail;
+}
+}
+
+if (qemu_file_get_error(s->file)) {
+trace_migration_thread_end_of_iteration_file_err();
+goto fail;
+}
+
+migrate_set_state(s, current_active_state, MIGRATION_STATUS_COMPLETED);
+return;
+
+fail:
+migrate_set_state(s, current_active_state, MIGRATION_STATUS_FAILED);
+}
+
+/*
  * Master migration thread on the source VM.
  * It drives the migration and pumps the data down the outgoing channel.
  */
@@ -1098,31 +1159,11 @@ static void *migration_thread(void *opaque)
 /* Just another iteration step */
 qemu_savevm_state_iterate(s->file);
 } else {
-int ret;
-
-qemu_mutex_lock_iothread();
-start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
-old_vm_running = runstate_is_running();
-
-ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
-if (ret >= 0) {
-qemu_file_set_rate_limit(s->file, INT64_MAX);
-qemu_savevm_state_complete_precopy(s->file);
-}
-qemu_mutex_unlock_iothread();
+trace_migration_thread_low_pending(pending_size);
 
-if (ret < 0) {
-migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
-  MIGRATION_STATUS_FAILED);
-break;
-}
-
-if (!qemu_file_get_error(s->file)) {
-migrate_set_state(s, MIGRATION_STATUS_ACTIVE,
-  MIGRATION_STATUS_COMPLETED);
-break;
-}
+migration_thread_end_of_iteration(s, current_active_type,
+&old_vm_running, &start_time);
+break;
 }
 }
 
diff --git a/trace-events b/trace-events
index efee724..a83eec2 100644
--- a/trace-events
+++ b/trace-events
@@ -1409,6 +1409,12 @@ migrate_send_rp_message(int msg_type, uint16_t len) "%d: 

[Qemu-devel] [PATCH v6 25/47] postcopy: OS support test

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Provide a check to see if the OS we're running on has all the bits
needed for postcopy.

Creates postcopy-ram.c which will get most of the other helpers we need.

Signed-off-by: Dr. David Alan Gilbert 
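
Stripped of the QEMU plumbing, the check amounts to the standalone probe below
(a sketch assuming a Linux kernel that carries the still-experimental userfaultfd
support and exports linux/userfaultfd.h): open a userfaultfd, do the UFFD_API
handshake, and confirm the register/unregister ioctls are offered.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

int main(void)
{
    int ufd = syscall(__NR_userfaultfd, O_CLOEXEC);
    if (ufd == -1) {
        fprintf(stderr, "userfaultfd not available: %s\n", strerror(errno));
        return 1;
    }

    struct uffdio_api api = { .api = UFFD_API };
    if (ioctl(ufd, UFFDIO_API, &api)) {
        fprintf(stderr, "UFFDIO_API failed: %s\n", strerror(errno));
        return 1;
    }

    /* Postcopy needs at least register/unregister on anonymous memory */
    __u64 want = (__u64)1 << _UFFDIO_REGISTER | (__u64)1 << _UFFDIO_UNREGISTER;
    if ((api.ioctls & want) != want) {
        fprintf(stderr, "missing userfault ioctls\n");
        return 1;
    }

    printf("postcopy prerequisites look OK\n");
    return 0;
}
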
---
 include/migration/postcopy-ram.h |  19 +
 migration/Makefile.objs  |   2 +-
 migration/postcopy-ram.c | 157 +++
 savevm.c |   5 ++
 4 files changed, 182 insertions(+), 1 deletion(-)
 create mode 100644 include/migration/postcopy-ram.h
 create mode 100644 migration/postcopy-ram.c

diff --git a/include/migration/postcopy-ram.h b/include/migration/postcopy-ram.h
new file mode 100644
index 000..d81934f
--- /dev/null
+++ b/include/migration/postcopy-ram.h
@@ -0,0 +1,19 @@
+/*
+ * Postcopy migration for RAM
+ *
+ * Copyright 2013 Red Hat, Inc. and/or its affiliates
+ *
+ * Authors:
+ *  Dave Gilbert  
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#ifndef QEMU_POSTCOPY_RAM_H
+#define QEMU_POSTCOPY_RAM_H
+
+/* Return true if the host supports everything we need to do postcopy-ram */
+bool postcopy_ram_supported_by_host(void);
+
+#endif
diff --git a/migration/Makefile.objs b/migration/Makefile.objs
index d929e96..0cac6d7 100644
--- a/migration/Makefile.objs
+++ b/migration/Makefile.objs
@@ -1,7 +1,7 @@
 common-obj-y += migration.o tcp.o
 common-obj-y += vmstate.o
 common-obj-y += qemu-file.o qemu-file-buf.o qemu-file-unix.o qemu-file-stdio.o
-common-obj-y += xbzrle.o
+common-obj-y += xbzrle.o postcopy-ram.o
 
 common-obj-$(CONFIG_RDMA) += rdma.o
 common-obj-$(CONFIG_POSIX) += exec.o unix.o fd.o
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
new file mode 100644
index 000..7704bc1
--- /dev/null
+++ b/migration/postcopy-ram.c
@@ -0,0 +1,157 @@
+/*
+ * Postcopy migration for RAM
+ *
+ * Copyright 2013-2015 Red Hat, Inc. and/or its affiliates
+ *
+ * Authors:
+ *  Dave Gilbert  
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+/*
+ * Postcopy is a migration technique where the execution flips from the
+ * source to the destination before all the data has been copied.
+ */
+
+#include 
+#include 
+#include 
+
+#include "qemu-common.h"
+#include "migration/migration.h"
+#include "migration/postcopy-ram.h"
+#include "sysemu/sysemu.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+/* Postcopy needs to detect accesses to pages that haven't yet been copied
+ * across, and efficiently map new pages in, the techniques for doing this
+ * are target OS specific.
+ */
+#if defined(__linux__)
+
+#include 
+#include 
+#include 
+#include 
+#include  /* for __u64 */
+#endif
+
+#if defined(__linux__) && defined(__NR_userfaultfd)
+#include 
+
+static bool ufd_version_check(int ufd)
+{
+struct uffdio_api api_struct;
+uint64_t feature_mask;
+
+api_struct.api = UFFD_API;
+if (ioctl(ufd, UFFDIO_API, &api_struct)) {
+error_report("postcopy_ram_supported_by_host: UFFDIO_API failed: %s",
+ strerror(errno));
+return false;
+}
+
+feature_mask = (__u64)1 << _UFFDIO_REGISTER |
+   (__u64)1 << _UFFDIO_UNREGISTER;
+if ((api_struct.ioctls & feature_mask) != feature_mask) {
+error_report("Missing userfault features: %" PRIx64,
+ (uint64_t)(~api_struct.ioctls & feature_mask));
+return false;
+}
+
+return true;
+}
+
+bool postcopy_ram_supported_by_host(void)
+{
+long pagesize = getpagesize();
+int ufd = -1;
+bool ret = false; /* Error unless we change it */
+void *testarea = NULL;
+struct uffdio_register reg_struct;
+struct uffdio_range range_struct;
+uint64_t feature_mask;
+
+if ((1ul << qemu_target_page_bits()) > pagesize) {
+error_report("Target page size bigger than host page size");
+goto out;
+}
+
+ufd = syscall(__NR_userfaultfd, O_CLOEXEC);
+if (ufd == -1) {
+error_report("%s: userfaultfd not available: %s", __func__,
+ strerror(errno));
+goto out;
+}
+
+/* Version and features check */
+if (!ufd_version_check(ufd)) {
+goto out;
+}
+
+/*
+ *  We need to check that the ops we need are supported on anon memory
+ *  To do that we need to register a chunk and see the flags that
+ *  are returned.
+ */
+testarea = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE |
+MAP_ANONYMOUS, -1, 0);
+if (testarea == MAP_FAILED) {
+error_report("%s: Failed to map test area: %s", __func__,
+ strerror(errno));
+goto out;
+}
+g_assert(((size_t)testarea & (pagesize-1)) == 0);
+
+reg_struct.range.start = (uintptr_t)testarea;
+reg_struct.range.len = pagesize;
+re

[Qemu-devel] [PATCH v6 21/47] Add wrappers and handlers for sending/receiving the postcopy-ram migration messages.

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The state of the postcopy process is managed via a series of messages;
   * Add wrappers and handlers for sending/receiving these messages
   * Add a state variable that tracks the current state of postcopy

Signed-off-by: Dr. David Alan Gilbert 
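
As an illustration of how the state accessors are meant to be used (my sketch,
not code from this patch; the exact transitions each command accepts are defined
by the handler patches later in the series), a command handler can atomically
advance the state and complain if the previous state was not the expected one:

#include "migration/migration.h"
#include "qemu/error-report.h"

static int handle_postcopy_listen(MigrationIncomingState *mis)
{
    /* postcopy_state_set() returns the old state, so check-and-advance is
     * a single atomic exchange rather than a read followed by a write. */
    PostcopyState old = postcopy_state_set(mis, POSTCOPY_INCOMING_LISTENING);

    if (old != POSTCOPY_INCOMING_ADVISE) {
        error_report("LISTEN command received in unexpected state %d", old);
        return -1;
    }
    /* ...start the listen/fault machinery (later patches)... */
    return 0;
}
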
---
 include/migration/migration.h |  15 +++
 include/sysemu/sysemu.h   |  20 
 migration/migration.c |  13 +++
 savevm.c  | 247 ++
 trace-events  |  10 ++
 5 files changed, 305 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 5858788..e3389dc 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -52,6 +52,14 @@ typedef struct MigrationState MigrationState;
 
 typedef QLIST_HEAD(, LoadStateEntry) LoadStateEntry_Head;
 
+typedef enum {
+POSTCOPY_INCOMING_NONE = 0,  /* Initial state - no postcopy */
+POSTCOPY_INCOMING_ADVISE,
+POSTCOPY_INCOMING_LISTENING,
+POSTCOPY_INCOMING_RUNNING,
+POSTCOPY_INCOMING_END
+} PostcopyState;
+
 /* State for the incoming migration */
 struct MigrationIncomingState {
 QEMUFile *file;
@@ -59,6 +67,8 @@ struct MigrationIncomingState {
 /* See savevm.c */
 LoadStateEntry_Head loadvm_handlers;
 
+PostcopyState postcopy_state;
+
 /*
  * Free at the start of the main state load, set as the main thread 
finishes
  * loading state.
@@ -220,4 +230,9 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t 
block_offset,
  ram_addr_t offset, size_t size,
  uint64_t *bytes_sent);
 
+PostcopyState postcopy_state_get(MigrationIncomingState *mis);
+
+/* Set the state and return the old state */
+PostcopyState postcopy_state_set(MigrationIncomingState *mis,
+ PostcopyState new_state);
 #endif
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 49ba134..6dd2382 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -87,6 +87,17 @@ enum qemu_vm_cmd {
 MIG_CMD_INVALID = 0,   /* Must be 0 */
 MIG_CMD_OPEN_RETURN_PATH,  /* Tell the dest to open the Return path */
 MIG_CMD_PING,  /* Request a PONG on the RP */
+
+MIG_CMD_POSTCOPY_ADVISE = 20,  /* Prior to any page transfers, just
+  warn we might want to do PC */
+MIG_CMD_POSTCOPY_LISTEN,   /* Start listening for incoming
+  pages as it's running. */
+MIG_CMD_POSTCOPY_RUN,  /* Start execution */
+
+MIG_CMD_POSTCOPY_RAM_DISCARD,  /* A list of pages to discard that
+  were previously sent during
+  precopy but are dirty. */
+
 };
 
 bool qemu_savevm_state_blocked(Error **errp);
@@ -101,6 +112,15 @@ void qemu_savevm_command_send(QEMUFile *f, enum 
qemu_vm_cmd command,
   uint16_t len, uint8_t *data);
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
 void qemu_savevm_send_open_return_path(QEMUFile *f);
+void qemu_savevm_send_postcopy_advise(QEMUFile *f);
+void qemu_savevm_send_postcopy_listen(QEMUFile *f);
+void qemu_savevm_send_postcopy_run(QEMUFile *f);
+
+void qemu_savevm_send_postcopy_ram_discard(QEMUFile *f, const char *name,
+   uint16_t len,
+   uint64_t *start_list,
+   uint64_t *end_list);
+
 int qemu_loadvm_state(QEMUFile *f);
 
 typedef enum DisplayType
diff --git a/migration/migration.c b/migration/migration.c
index f641fc7..b72a4c7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -995,3 +995,16 @@ void migrate_fd_connect(MigrationState *s)
 qemu_thread_create(&s->thread, "migration", migration_thread, s,
QEMU_THREAD_JOINABLE);
 }
+
+PostcopyState  postcopy_state_get(MigrationIncomingState *mis)
+{
+return atomic_fetch_add(&mis->postcopy_state, 0);
+}
+
+/* Set the state and return the old state */
+PostcopyState postcopy_state_set(MigrationIncomingState *mis,
+ PostcopyState new_state)
+{
+return atomic_xchg(&mis->postcopy_state, new_state);
+}
+
diff --git a/savevm.c b/savevm.c
index e7d42dc..8d2fe1f 100644
--- a/savevm.c
+++ b/savevm.c
@@ -39,6 +39,7 @@
 #include "exec/memory.h"
 #include "qmp-commands.h"
 #include "trace.h"
+#include "qemu/bitops.h"
 #include "qemu/iov.h"
 #include "block/snapshot.h"
 #include "block/qapi.h"
@@ -634,6 +635,77 @@ void qemu_savevm_send_open_return_path(QEMUFile *f)
 qemu_savevm_command_send(f, MIG_CMD_OPEN_RETURN_PATH, 0, NULL);
 }
 
+/* Send prior to any postcopy transfer */
+void qemu_savevm_send_postcopy_advise(QEMUFile *f)
+{
+uint64_t tmp[2];
+tmp[0] = cpu_to_be64(getpagesize());
+tmp[1] = cpu_to_be64(1ul << qemu_target_page_bits());
+
+trace_qemu_s

[Qemu-devel] [PATCH v6 28/47] Add qemu_savevm_state_complete_postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Add qemu_savevm_state_complete_postcopy to complement
qemu_savevm_state_complete_precopy together with a new
save_live_complete_postcopy method on devices.

The save_live_complete_precopy method is called on
all devices during a precopy migration, and all non-postcopy
devices during a postcopy migration at the transition.

The save_live_complete_postcopy method is called at
the end of postcopy for all postcopiable devices.

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c |  1 +
 include/migration/vmstate.h |  1 +
 include/sysemu/sysemu.h |  1 +
 savevm.c| 50 ++---
 4 files changed, 50 insertions(+), 3 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 977e98b..0a49ace 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1275,6 +1275,7 @@ static int ram_load(QEMUFile *f, void *opaque, int 
version_id)
 static SaveVMHandlers savevm_ram_handlers = {
 .save_live_setup = ram_save_setup,
 .save_live_iterate = ram_save_iterate,
+.save_live_complete_postcopy = ram_save_complete,
 .save_live_complete_precopy = ram_save_complete,
 .save_live_pending = ram_save_pending,
 .load_state = ram_load,
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 50efb09..06bed0a 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -40,6 +40,7 @@ typedef struct SaveVMHandlers {
 SaveStateHandler *save_state;
 
 void (*cancel)(void *opaque);
+int (*save_live_complete_postcopy)(QEMUFile *f, void *opaque);
 int (*save_live_complete_precopy)(QEMUFile *f, void *opaque);
 
 /* This runs both outside and inside the iothread lock.  */
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index e45ef62..248f0d6 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -108,6 +108,7 @@ void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params);
 void qemu_savevm_state_header(QEMUFile *f);
 int qemu_savevm_state_iterate(QEMUFile *f);
+void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
 void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
diff --git a/savevm.c b/savevm.c
index 23cc99e..c2d6241 100644
--- a/savevm.c
+++ b/savevm.c
@@ -866,7 +866,46 @@ int qemu_savevm_state_iterate(QEMUFile *f)
 static bool should_send_vmdesc(void)
 {
 MachineState *machine = MACHINE(qdev_get_machine());
-return !machine->suppress_vmdesc;
+bool in_postcopy = migration_postcopy_phase(migrate_get_current());
+return !machine->suppress_vmdesc && !in_postcopy;
+}
+
+/*
+ * Calls the save_live_complete_postcopy methods
+ * causing the last few pages to be sent immediately and doing any associated
+ * cleanup.
+ * Note postcopy also calls qemu_savevm_state_complete_precopy to complete
+ * all the other devices, but that happens at the point we switch to postcopy.
+ */
+void qemu_savevm_state_complete_postcopy(QEMUFile *f)
+{
+SaveStateEntry *se;
+int ret;
+
+QTAILQ_FOREACH(se, &savevm_handlers, entry) {
+if (!se->ops || !se->ops->save_live_complete_postcopy) {
+continue;
+}
+if (se->ops && se->ops->is_active) {
+if (!se->ops->is_active(se->opaque)) {
+continue;
+}
+}
+trace_savevm_section_start(se->idstr, se->section_id);
+/* Section type */
+qemu_put_byte(f, QEMU_VM_SECTION_END);
+qemu_put_be32(f, se->section_id);
+
+ret = se->ops->save_live_complete_postcopy(f, se->opaque);
+trace_savevm_section_end(se->idstr, se->section_id, ret);
+if (ret < 0) {
+qemu_file_set_error(f, ret);
+return;
+}
+}
+
+qemu_put_byte(f, QEMU_VM_EOF);
+qemu_fflush(f);
 }
 
 void qemu_savevm_state_complete_precopy(QEMUFile *f)
@@ -875,13 +914,15 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f)
 int vmdesc_len;
 SaveStateEntry *se;
 int ret;
+bool in_postcopy = migration_postcopy_phase(migrate_get_current());
 
 trace_savevm_state_complete_precopy();
 
 cpu_synchronize_all_states();
 
 QTAILQ_FOREACH(se, &savevm_handlers, entry) {
-if (!se->ops || !se->ops->save_live_complete_precopy) {
+if (!se->ops || !se->ops->save_live_complete_precopy ||
+(in_postcopy && se->ops->save_live_complete_postcopy)) {
 continue;
 }
 if (se->ops && se->ops->is_active) {
@@ -935,7 +976,10 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f)
 trace_savevm_section_end(se->idstr, se->section_id, 0);
 }
 
-qemu_put_byte(f, QEMU_VM_EOF);
+if (!in_postcopy) {
+/* Postcopy stream will still be going */
+qemu_put_byte(f, QEMU_VM_EOF);
+}
 
 json_end_array(vmdesc);
 qjson_finish(vmdesc);
-- 
2.1.0

[Qemu-devel] [PATCH v6 30/47] postcopy: Incoming initialisation

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 arch_init.c  |  11 
 include/migration/migration.h|   3 +
 include/migration/postcopy-ram.h |  12 
 migration/postcopy-ram.c | 116 +++
 savevm.c |   4 ++
 trace-events |   2 +
 6 files changed, 148 insertions(+)

diff --git a/arch_init.c b/arch_init.c
index efc2938..2c937d1 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1353,6 +1353,17 @@ void ram_handle_compressed(void *host, uint8_t ch, 
uint64_t size)
 }
 }
 
+/*
+ * Allocate data structures etc needed by incoming migration with postcopy-ram
+ * postcopy-ram's similarly named postcopy_ram_incoming_init does the work
+ */
+int ram_postcopy_incoming_init(MigrationIncomingState *mis)
+{
+size_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+
+return postcopy_ram_incoming_init(mis, ram_pages);
+}
+
 static int ram_load(QEMUFile *f, void *opaque, int version_id)
 {
 int flags = 0, ret = 0;
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 15707fc..8c8afc4 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -73,6 +73,8 @@ struct MigrationIncomingState {
  */
 QemuEvent  main_thread_load_event;
 
+/* For the kernel to send us notifications */
+intuserfault_fd;
 QEMUFile *return_path;
 QemuMutex  rp_mutex;/* We send replies from multiple threads */
 };
@@ -190,6 +192,7 @@ int ram_postcopy_send_discard_bitmap(MigrationState *ms);
 /* For incoming postcopy discard */
 int ram_discard_range(MigrationIncomingState *mis, const char *block_name,
   uint64_t start, uint64_t end);
+int ram_postcopy_incoming_init(MigrationIncomingState *mis);
 
 /**
  * @migrate_add_blocker - prevent migration from proceeding
diff --git a/include/migration/postcopy-ram.h b/include/migration/postcopy-ram.h
index 1d38f76..b46af08 100644
--- a/include/migration/postcopy-ram.h
+++ b/include/migration/postcopy-ram.h
@@ -17,6 +17,18 @@
 bool postcopy_ram_supported_by_host(void);
 
 /*
+ * Initialise postcopy-ram, setting the RAM to a state where we can go into
+ * postcopy later; must be called prior to any precopy.
+ * called from arch_init's similarly named ram_postcopy_incoming_init
+ */
+int postcopy_ram_incoming_init(MigrationIncomingState *mis, size_t ram_pages);
+
+/*
+ * At the end of a migration where postcopy_ram_incoming_init was called.
+ */
+int postcopy_ram_incoming_cleanup(MigrationIncomingState *mis);
+
+/*
  * Discard the contents of memory start..end inclusive.
  * We can assume that if we've been called postcopy_ram_hosttest returned true
  */
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index a10f3ca..16b78c2 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -176,6 +176,111 @@ int postcopy_ram_discard_range(MigrationIncomingState 
*mis, uint8_t *start,
 return 0;
 }
 
+/*
+ * Setup an area of RAM so that it *can* be used for postcopy later; this
+ * must be done right at the start prior to pre-copy.
+ * opaque should be the MIS.
+ */
+static int init_area(const char *block_name, void *host_addr,
+ ram_addr_t offset, ram_addr_t length, void *opaque)
+{
+MigrationIncomingState *mis = opaque;
+
+trace_postcopy_init_area(block_name, host_addr, offset, length);
+
+/*
+ * We need the whole of RAM to be truly empty for postcopy, so things
+ * like ROMs and any data tables built during init must be zero'd
+ * - we're going to get the copy from the source anyway.
+ * (Precopy will just overwrite this data, so doesn't need the discard)
+ */
+if (postcopy_ram_discard_range(mis, host_addr, (host_addr + length - 1))) {
+return -1;
+}
+
+/*
+ * We also need the area to be normal 4k pages, not huge pages
+ * (otherwise we can't be sure we can atomically place the
+ * 4k page in later).  THP might come along and map a 2MB page
+ * and when it's partially accessed in precopy it might not break
+ * it down, but leave a 2MB zero'd page.
+ */
+#ifdef MADV_NOHUGEPAGE
+if (madvise(host_addr, length, MADV_NOHUGEPAGE)) {
+error_report("%s: NOHUGEPAGE: %s", __func__, strerror(errno));
+return -1;
+}
+#endif
+
+return 0;
+}
+
+/*
+ * At the end of migration, undo the effects of init_area
+ * opaque should be the MIS.
+ */
+static int cleanup_area(const char *block_name, void *host_addr,
+ram_addr_t offset, ram_addr_t length, void *opaque)
+{
+MigrationIncomingState *mis = opaque;
+struct uffdio_range range_struct;
+trace_postcopy_cleanup_area(block_name, host_addr, offset, length);
+
+/*
+ * We turned off hugepage for the precopy stage with postcopy enabled
+ * we can turn it back on now.
+ */
+#ifdef MADV_HUGEPAG

[Qemu-devel] [PATCH v6 24/47] Modify save_live_pending for postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Modify save_live_pending to return separate postcopiable and
non-postcopiable counts.

Signed-off-by: Dr. David Alan Gilbert 
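
A hypothetical device (not part of the patch) showing the shape of the new
callback: state that must be fully transferred before the switch to postcopy is
reported separately from state that can still be streamed afterwards.

#include <stdint.h>
#include "migration/vmstate.h"

/* 'MyDevState' and the two helpers below are purely illustrative. */
typedef struct MyDevState MyDevState;
uint64_t mydev_config_bytes(MyDevState *s);
uint64_t mydev_bulk_bytes_remaining(MyDevState *s);

static void mydev_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
                               uint64_t *non_postcopiable_pending,
                               uint64_t *postcopiable_pending)
{
    MyDevState *s = opaque;

    /* Registers/config must be complete before the destination CPUs run */
    *non_postcopiable_pending = mydev_config_bytes(s);
    /* Bulk contents could keep arriving while the destination already runs */
    *postcopiable_pending = mydev_bulk_bytes_remaining(s);
}
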
---
 arch_init.c |  8 ++--
 include/migration/vmstate.h |  5 +++--
 include/sysemu/sysemu.h |  4 +++-
 migration/block.c   |  7 +--
 migration/migration.c   |  9 +++--
 savevm.c| 21 +
 trace-events|  2 +-
 7 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 2b0cd18..977e98b 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1053,7 +1053,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 return 0;
 }
 
-static uint64_t ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
+static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
+ uint64_t *non_postcopiable_pending,
+ uint64_t *postcopiable_pending)
 {
 uint64_t remaining_size;
 
@@ -1067,7 +1069,9 @@ static uint64_t ram_save_pending(QEMUFile *f, void 
*opaque, uint64_t max_size)
 qemu_mutex_unlock_iothread();
 remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
 }
-return remaining_size;
+
+*non_postcopiable_pending = 0;
+*postcopiable_pending = remaining_size;
 }
 
 static int load_xbzrle(QEMUFile *f, ram_addr_t addr, void *host)
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index b86b3d9..50efb09 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -54,8 +54,9 @@ typedef struct SaveVMHandlers {
 
 /* This runs outside the iothread lock!  */
 int (*save_live_setup)(QEMUFile *f, void *opaque);
-uint64_t (*save_live_pending)(QEMUFile *f, void *opaque, uint64_t 
max_size);
-
+void (*save_live_pending)(QEMUFile *f, void *opaque, uint64_t max_size,
+  uint64_t *non_postcopiable_pending,
+  uint64_t *postcopiable_pending);
 LoadStateHandler *load_state;
 } SaveVMHandlers;
 
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 0e3bf1e..e45ef62 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -110,7 +110,9 @@ void qemu_savevm_state_header(QEMUFile *f);
 int qemu_savevm_state_iterate(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
-uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
+void qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size,
+   uint64_t *res_non_postcopiable,
+   uint64_t *res_postcopiable);
 void qemu_savevm_command_send(QEMUFile *f, enum qemu_vm_cmd command,
   uint16_t len, uint8_t *data);
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
diff --git a/migration/block.c b/migration/block.c
index 00f4998..802dbfa 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -755,7 +755,9 @@ static int block_save_complete(QEMUFile *f, void *opaque)
 return 0;
 }
 
-static uint64_t block_save_pending(QEMUFile *f, void *opaque, uint64_t 
max_size)
+static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
+   uint64_t *non_postcopiable_pending,
+   uint64_t *postcopiable_pending)
 {
 /* Estimate pending number of bytes to send */
 uint64_t pending;
@@ -774,7 +776,8 @@ static uint64_t block_save_pending(QEMUFile *f, void 
*opaque, uint64_t max_size)
 qemu_mutex_unlock_iothread();
 
 DPRINTF("Enter save live pending  %" PRIu64 "\n", pending);
-return pending;
+*non_postcopiable_pending = pending;
+*postcopiable_pending = 0;
 }
 
 static int block_load(QEMUFile *f, void *opaque, int version_id)
diff --git a/migration/migration.c b/migration/migration.c
index 45284b2..ae737d1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -889,8 +889,13 @@ static void *migration_thread(void *opaque)
 uint64_t pending_size;
 
 if (!qemu_file_rate_limit(s->file)) {
-pending_size = qemu_savevm_state_pending(s->file, max_size);
-trace_migrate_pending(pending_size, max_size);
+uint64_t pend_post, pend_nonpost;
+
+qemu_savevm_state_pending(s->file, max_size, &pend_nonpost,
+  &pend_post);
+pending_size = pend_nonpost + pend_post;
+trace_migrate_pending(pending_size, max_size,
+  pend_post, pend_nonpost);
 if (pending_size && pending_size >= max_size) {
 qemu_savevm_state_iterate(s->file);
 } else {
diff --git a/savevm.c b/savevm.c
index c281d1b..79bbded 100644
--- a/savevm.c
+++ b/savevm.c
@@ -950,10 +950,20 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f)
 qemu_fflush(f);
 }
 
-uint64_t qemu_

[Qemu-devel] [PATCH v6 18/47] Move loadvm_handlers into MigrationIncomingState

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

In postcopy we need the loadvm_handlers to be used in a couple
of different instances of the loadvm loop/routine, and thus
it can't be local any more.

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 include/migration/migration.h |  5 +
 include/migration/vmstate.h   |  2 ++
 include/qemu/typedefs.h   |  1 +
 migration/migration.c |  2 ++
 savevm.c  | 28 
 5 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index fb7551d..92a6068 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -50,10 +50,15 @@ enum mig_rp_message_type {
 
 typedef struct MigrationState MigrationState;
 
+typedef QLIST_HEAD(, LoadStateEntry) LoadStateEntry_Head;
+
 /* State for the incoming migration */
 struct MigrationIncomingState {
 QEMUFile *file;
 
+/* See savevm.c */
+LoadStateEntry_Head loadvm_handlers;
+
 QEMUFile *return_path;
 QemuMutex  rp_mutex;/* We send replies from multiple threads */
 };
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 55cd174..b86b3d9 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -812,6 +812,8 @@ extern const VMStateInfo vmstate_info_bitmap;
 
 #define SELF_ANNOUNCE_ROUNDS 5
 
+void loadvm_free_handlers(MigrationIncomingState *mis);
+
 int vmstate_load_state(QEMUFile *f, const VMStateDescription *vmsd,
void *opaque, int version_id);
 void vmstate_save_state(QEMUFile *f, const VMStateDescription *vmsd,
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index 74dfad3..6fdcbcd 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -31,6 +31,7 @@ typedef struct I2CBus I2CBus;
 typedef struct I2SCodec I2SCodec;
 typedef struct ISABus ISABus;
 typedef struct ISADevice ISADevice;
+typedef struct LoadStateEntry LoadStateEntry;
 typedef struct MACAddr MACAddr;
 typedef struct MachineClass MachineClass;
 typedef struct MachineState MachineState;
diff --git a/migration/migration.c b/migration/migration.c
index 88355e2..bcad9a4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -70,6 +70,7 @@ MigrationIncomingState *migration_incoming_state_new(QEMUFile* f)
 {
 mis_current = g_malloc0(sizeof(MigrationIncomingState));
 mis_current->file = f;
+QLIST_INIT(&mis_current->loadvm_handlers);
 qemu_mutex_init(&mis_current->rp_mutex);
 
 return mis_current;
@@ -77,6 +78,7 @@ MigrationIncomingState *migration_incoming_state_new(QEMUFile* f)
 
 void migration_incoming_state_destroy(void)
 {
+loadvm_free_handlers(mis_current);
 g_free(mis_current);
 mis_current = NULL;
 }
diff --git a/savevm.c b/savevm.c
index f6b8b90..ef174d7 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1027,18 +1027,26 @@ static int loadvm_process_command(QEMUFile *f)
 return 0;
 }
 
-typedef struct LoadStateEntry {
+struct LoadStateEntry {
 QLIST_ENTRY(LoadStateEntry) entry;
 SaveStateEntry *se;
 int section_id;
 int version_id;
-} LoadStateEntry;
+};
 
-int qemu_loadvm_state(QEMUFile *f)
+void loadvm_free_handlers(MigrationIncomingState *mis)
 {
-QLIST_HEAD(, LoadStateEntry) loadvm_handlers =
-QLIST_HEAD_INITIALIZER(loadvm_handlers);
 LoadStateEntry *le, *new_le;
+
+QLIST_FOREACH_SAFE(le, &mis->loadvm_handlers, entry, new_le) {
+QLIST_REMOVE(le, entry);
+g_free(le);
+}
+}
+
+int qemu_loadvm_state(QEMUFile *f)
+{
+MigrationIncomingState *mis = migration_incoming_get_current();
 Error *local_err = NULL;
 uint8_t section_type;
 unsigned int v;
@@ -1069,6 +1077,7 @@ int qemu_loadvm_state(QEMUFile *f)
 while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
 uint32_t instance_id, version_id, section_id;
 SaveStateEntry *se;
+LoadStateEntry *le;
 char idstr[256];
 
 trace_qemu_loadvm_state_section(section_type);
@@ -1110,7 +1119,7 @@ int qemu_loadvm_state(QEMUFile *f)
 le->se = se;
 le->section_id = section_id;
 le->version_id = version_id;
-QLIST_INSERT_HEAD(&loadvm_handlers, le, entry);
+QLIST_INSERT_HEAD(&mis->loadvm_handlers, le, entry);
 
 ret = vmstate_load(f, le->se, le->version_id);
 if (ret < 0) {
@@ -1124,7 +1133,7 @@ int qemu_loadvm_state(QEMUFile *f)
 section_id = qemu_get_be32(f);
 
 trace_qemu_loadvm_state_section_partend(section_id);
-QLIST_FOREACH(le, &loadvm_handlers, entry) {
+QLIST_FOREACH(le, &mis->loadvm_handlers, entry) {
 if (le->section_id == section_id) {
 break;
 }
@@ -1178,11 +1187,6 @@ int qemu_loadvm_state(QEMUFile *f)
 ret = 0;
 
 out:
-QLIST_FOREACH_SAFE(le, &loadvm_handlers, entry, new_le) {
-QLIST_REMOVE(le, ent

[Qemu-devel] [PATCH v6 26/47] migrate_start_postcopy: Command to trigger transition to postcopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Once postcopy is enabled (with migrate_set_capability), the migration
will still start on precopy mode.  To cause a transition into postcopy
the:

  migrate_start_postcopy

command must be issued.  Postcopy will start sometime after this
(when it's next checked in the migration loop).

Issuing the command before migration has started will error,
and issuing after it has finished is ignored.
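
For illustration only (not part of the patch; the destination URI is made
up, and the capability name comes from the "Add migration-capability
boolean for postcopy-ram" patch in this series), the intended flow on the
source HMP monitor is:

  (qemu) migrate_set_capability x-postcopy-ram on
  (qemu) migrate -d tcp:destination-host:4444
  (qemu) migrate_start_postcopy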

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: Eric Blake 
---
 hmp-commands.hx   | 15 +++
 hmp.c |  7 +++
 hmp.h |  1 +
 include/migration/migration.h |  3 +++
 migration/migration.c | 22 ++
 qapi-schema.json  |  8 
 qmp-commands.hx   | 19 +++
 7 files changed, 75 insertions(+)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 3089533..ff620ce 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -993,6 +993,21 @@ Enable/Disable the usage of a capability @var{capability} for migration.
 ETEXI
 
 {
+.name   = "migrate_start_postcopy",
+.args_type  = "",
+.params = "",
+.help   = "Switch migration to postcopy mode",
+.mhandler.cmd = hmp_migrate_start_postcopy,
+},
+
+STEXI
+@item migrate_start_postcopy
+@findex migrate_start_postcopy
+Switch in-progress migration to postcopy mode. Ignored after the end of
+migration (or once already in postcopy).
+ETEXI
+
+{
 .name   = "client_migrate_info",
 .args_type  = "protocol:s,hostname:s,port:i?,tls-port:i?,cert-subject:s?",
 .params = "protocol hostname port tls-port cert-subject",
diff --git a/hmp.c b/hmp.c
index f31ae27..60e4411 100644
--- a/hmp.c
+++ b/hmp.c
@@ -1184,6 +1184,13 @@ void hmp_migrate_set_capability(Monitor *mon, const QDict *qdict)
 }
 }
 
+void hmp_migrate_start_postcopy(Monitor *mon, const QDict *qdict)
+{
+Error *err = NULL;
+qmp_migrate_start_postcopy(&err);
+hmp_handle_error(mon, &err);
+}
+
 void hmp_set_password(Monitor *mon, const QDict *qdict)
 {
 const char *protocol  = qdict_get_str(qdict, "protocol");
diff --git a/hmp.h b/hmp.h
index 2b9308b..c79a7b5 100644
--- a/hmp.h
+++ b/hmp.h
@@ -65,6 +65,7 @@ void hmp_migrate_set_downtime(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_speed(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_capability(Monitor *mon, const QDict *qdict);
 void hmp_migrate_set_cache_size(Monitor *mon, const QDict *qdict);
+void hmp_migrate_start_postcopy(Monitor *mon, const QDict *qdict);
 void hmp_set_password(Monitor *mon, const QDict *qdict);
 void hmp_expire_password(Monitor *mon, const QDict *qdict);
 void hmp_eject(Monitor *mon, const QDict *qdict);
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 1b9a535..4db9393 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -110,6 +110,9 @@ struct MigrationState
 int64_t xbzrle_cache_size;
 int64_t setup_time;
 int64_t dirty_sync_count;
+
+/* Flag set once the migration has been asked to enter postcopy */
+bool start_postcopy;
 };
 
 void process_incoming_migration(QEMUFile *f);
diff --git a/migration/migration.c b/migration/migration.c
index ae737d1..17da8ab 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -377,6 +377,28 @@ void qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
 }
 }
 
+void qmp_migrate_start_postcopy(Error **errp)
+{
+MigrationState *s = migrate_get_current();
+
+if (!migrate_postcopy_ram()) {
+error_setg(errp, "Enable postcopy with migration_set_capability before"
+ " the start of migration");
+return;
+}
+
+if (s->state == MIGRATION_STATUS_NONE) {
+error_setg(errp, "Postcopy must be started after migration has been"
+ " started");
+return;
+}
+/*
+ * we don't error if migration has finished since that would be racy
+ * with issuing this command.
+ */
+atomic_set(&s->start_postcopy, true);
+}
+
 /* shared migration helpers */
 
 static void migrate_set_state(MigrationState *s, int old_state, int new_state)
diff --git a/qapi-schema.json b/qapi-schema.json
index dcd3e62..faf572f 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -566,6 +566,14 @@
 { 'command': 'query-migrate-capabilities', 'returns':   ['MigrationCapabilityStatus']}
 
 ##
+# @migrate-start-postcopy
+#
+# Switch migration to postcopy mode
+#
+# Since: 2.3
+{ 'command': 'migrate-start-postcopy' }
+
+##
 # @MouseInfo:
 #
 # Information about a mouse device.
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 3a42ad0..d564d7b 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -713,6 +713,25 @@ Example:
 
 EQMP
 {
+.name   = "migrate-start-postcopy",
+.args_type  = "",
+.mhandler.cmd_new = qmp_marshal

[Qemu-devel] [PATCH v6 32/47] Postcopy: Postcopy startup in migration thread

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Rework the migration thread to setup and start postcopy.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |   3 +
 migration/migration.c | 163 --
 trace-events  |   4 ++
 3 files changed, 165 insertions(+), 5 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 36451de..c02266e 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -119,6 +119,9 @@ struct MigrationState
 /* Flag set once the migration has been asked to enter postcopy */
 bool start_postcopy;
 
+/* Flag set once the migration thread is running (and needs joining) */
+bool started_migration_thread;
+
 /* bitmap of pages that have been sent at least once
  * only maintained and used in postcopy at the moment
  * where it's used to send the dirtymap at the start
diff --git a/migration/migration.c b/migration/migration.c
index 63205c3..611aca8 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -469,7 +469,10 @@ static void migrate_fd_cleanup(void *opaque)
 if (s->file) {
 trace_migrate_fd_cleanup();
 qemu_mutex_unlock_iothread();
-qemu_thread_join(&s->thread);
+if (s->started_migration_thread) {
+qemu_thread_join(&s->thread);
+s->started_migration_thread = false;
+}
 qemu_mutex_lock_iothread();
 
 qemu_fclose(s->file);
@@ -886,7 +889,6 @@ out:
 return NULL;
 }
 
-__attribute__ (( unused )) /* Until later in patch series */
 static int open_return_path_on_source(MigrationState *ms)
 {
 
@@ -925,23 +927,141 @@ static int await_return_path_close_on_source(MigrationState *ms)
 }
 
 /*
+ * Switch from normal iteration to postcopy
+ * Returns non-0 on error
+ */
+static int postcopy_start(MigrationState *ms, bool *old_vm_running)
+{
+int ret;
+const QEMUSizedBuffer *qsb;
+int64_t time_at_stop = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+migrate_set_state(ms, MIGRATION_STATUS_ACTIVE,
+  MIGRATION_STATUS_POSTCOPY_ACTIVE);
+
+trace_postcopy_start();
+qemu_mutex_lock_iothread();
+trace_postcopy_start_set_run();
+
+qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
+*old_vm_running = runstate_is_running();
+
+ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+
+if (ret < 0) {
+goto fail;
+}
+
+/*
+ * in Finish migrate and with the io-lock held everything should
+ * be quiet, but we've potentially still got dirty pages and we
+ * need to tell the destination to throw any pages it's already received
+ * that are dirty
+ */
+if (ram_postcopy_send_discard_bitmap(ms)) {
+error_report("postcopy send discard bitmap failed");
+goto fail;
+}
+
+/*
+ * send rest of state - note things that are doing postcopy
+ * will notice we're in POSTCOPY_ACTIVE and not actually
+ * wrap their state up here
+ */
+qemu_file_set_rate_limit(ms->file, INT64_MAX);
+/* Ping just for debugging, helps line traces up */
+qemu_savevm_send_ping(ms->file, 2);
+
+/*
+ * We need to leave the fd free for page transfers during the
+ * loading of the device state, so wrap all the remaining
+ * commands and state into a package that gets sent in one go
+ */
+QEMUFile *fb = qemu_bufopen("w", NULL);
+if (!fb) {
+error_report("Failed to create buffered file");
+goto fail;
+}
+
+qemu_savevm_state_complete_precopy(fb);
+qemu_savevm_send_ping(fb, 3);
+
+qemu_savevm_send_postcopy_run(fb);
+
+/* <><> end of stuff going into the package */
+qsb = qemu_buf_get(fb);
+
+/* Now send that blob */
+if (qemu_savevm_send_packaged(ms->file, qsb)) {
+goto fail_closefb;
+}
+qemu_fclose(fb);
+ms->downtime =  qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - time_at_stop;
+
+qemu_mutex_unlock_iothread();
+
+/*
+ * Although this ping is just for debug, it could potentially be
+ * used for getting a better measurement of downtime at the source.
+ */
+qemu_savevm_send_ping(ms->file, 4);
+
+ret = qemu_file_get_error(ms->file);
+if (ret) {
+error_report("postcopy_start: Migration stream errored");
+migrate_set_state(ms, MIGRATION_STATUS_POSTCOPY_ACTIVE,
+  MIGRATION_STATUS_FAILED);
+}
+
+return ret;
+
+fail_closefb:
+qemu_fclose(fb);
+fail:
+migrate_set_state(ms, MIGRATION_STATUS_POSTCOPY_ACTIVE,
+  MIGRATION_STATUS_FAILED);
+qemu_mutex_unlock_iothread();
+return -1;
+}
+
+/*
  * Master migration thread on the source VM.
  * It drives the migration and pumps the data down the outgoing channel.
  */
 static void *migration_thread(void *opaque)
 {
 MigrationState *s = opaque;
+/* Used by the bandwidth calcs, updated later */
 int64

[Qemu-devel] [PATCH v6 22/47] MIG_CMD_PACKAGED: Send a packaged chunk of migration stream

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

MIG_CMD_PACKAGED is a migration command that wraps a chunk of migration
stream inside a package whose length can be determined purely by reading
its header.  The destination guarantees that the whole MIG_CMD_PACKAGED
is read off the stream prior to parsing the contents.

This is used by postcopy to load device state (from the package)
while leaving the main stream free to receive memory pages.
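
As a rough sketch of the framing (inferred from qemu_savevm_command_send
and the loadvm_process_command reader, so treat the exact layout as an
assumption rather than a specification):

  1 byte   QEMU_VM_COMMAND section type
  be16     MIG_CMD_PACKAGED
  be16     4          (length of the command data)
  be32     N          (length of the package)
  N bytes  the embedded migration stream, replayed through the normal
           loadvm machinery on the destination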

Signed-off-by: Dr. David Alan Gilbert 
---
 include/sysemu/sysemu.h |  4 +++
 savevm.c| 94 +
 trace-events|  4 +++
 3 files changed, 102 insertions(+)

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 6dd2382..0e3bf1e 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -87,6 +87,7 @@ enum qemu_vm_cmd {
 MIG_CMD_INVALID = 0,   /* Must be 0 */
 MIG_CMD_OPEN_RETURN_PATH,  /* Tell the dest to open the Return path */
 MIG_CMD_PING,  /* Request a PONG on the RP */
+MIG_CMD_PACKAGED,  /* Send a wrapped stream within this stream */
 
 MIG_CMD_POSTCOPY_ADVISE = 20,  /* Prior to any page transfers, just
   warn we might want to do PC */
@@ -100,6 +101,8 @@ enum qemu_vm_cmd {
 
 };
 
+#define MAX_VM_CMD_PACKAGED_SIZE (1ul << 24)
+
 bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params);
@@ -112,6 +115,7 @@ void qemu_savevm_command_send(QEMUFile *f, enum qemu_vm_cmd command,
   uint16_t len, uint8_t *data);
 void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
 void qemu_savevm_send_open_return_path(QEMUFile *f);
+int qemu_savevm_send_packaged(QEMUFile *f, const QEMUSizedBuffer *qsb);
 void qemu_savevm_send_postcopy_advise(QEMUFile *f);
 void qemu_savevm_send_postcopy_listen(QEMUFile *f);
 void qemu_savevm_send_postcopy_run(QEMUFile *f);
diff --git a/savevm.c b/savevm.c
index 8d2fe1f..1e940af 100644
--- a/savevm.c
+++ b/savevm.c
@@ -635,6 +635,50 @@ void qemu_savevm_send_open_return_path(QEMUFile *f)
 qemu_savevm_command_send(f, MIG_CMD_OPEN_RETURN_PATH, 0, NULL);
 }
 
+/* We have a buffer of data to send; we don't want that all to be loaded
+ * by the command itself, so the command contains just the length of the
+ * extra buffer that we then send straight after it.
+ * TODO: Must be a better way to organise that
+ *
+ * Returns:
+ *0 on success
+ *-ve on error
+ */
+int qemu_savevm_send_packaged(QEMUFile *f, const QEMUSizedBuffer *qsb)
+{
+size_t cur_iov;
+size_t len = qsb_get_length(qsb);
+uint32_t tmp;
+
+if (len > MAX_VM_CMD_PACKAGED_SIZE) {
+error_report("%s: Unreasonably large packaged state: %zu",
+ __func__, len);
+return -1;
+}
+
+tmp = cpu_to_be32(len);
+
+trace_qemu_savevm_send_packaged();
+qemu_savevm_command_send(f, MIG_CMD_PACKAGED, 4, (uint8_t *)&tmp);
+
+/* all the data follows (concatenating the iov's) */
+for (cur_iov = 0; cur_iov < qsb->n_iov; cur_iov++) {
+/* The iov entries are partially filled */
+size_t towrite = (qsb->iov[cur_iov].iov_len > len) ?
+  len :
+  qsb->iov[cur_iov].iov_len;
+len -= towrite;
+
+if (!towrite) {
+break;
+}
+
+qemu_put_buffer(f, qsb->iov[cur_iov].iov_base, towrite);
+}
+
+return 0;
+}
+
 /* Send prior to any postcopy transfer */
 void qemu_savevm_send_postcopy_advise(QEMUFile *f)
 {
@@ -1199,6 +1243,48 @@ static int loadvm_process_command_simple_lencheck(const char *name,
 return 0;
 }
 
+/* Immediately following this command is a blob of data containing an embedded
+ * chunk of migration stream; read it and load it.
+ */
+static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis,
+  uint32_t length)
+{
+int ret;
+uint8_t *buffer;
+QEMUSizedBuffer *qsb;
+
+trace_loadvm_handle_cmd_packaged(length);
+
+if (length > MAX_VM_CMD_PACKAGED_SIZE) {
+error_report("Unreasonably large packaged state: %u", length);
+return -1;
+}
+buffer = g_malloc0(length);
+ret = qemu_get_buffer(mis->file, buffer, (int)length);
+if (ret != length) {
+g_free(buffer);
+error_report("CMD_PACKAGED: Buffer receive fail ret=%d length=%d\n",
+ret, length);
+return (ret < 0) ? ret : -EAGAIN;
+}
+trace_loadvm_handle_cmd_packaged_received(ret);
+
+/* Setup a dummy QEMUFile that actually reads from the buffer */
+qsb = qsb_create(buffer, length);
+g_free(buffer); /* Because qsb_create copies */
+if (!qsb) {
+error_report("Unable to create qsb");
+}
+QEMUFile *packf = qemu_bufopen("r", qsb);
+
+ret = qemu_loadvm_state_main(packf, mis);
+trace_loadvm_handle_cmd_packaged_main(r

[Qemu-devel] [PATCH v6 17/47] ram_debug_dump_bitmap: Dump a migration bitmap as text

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Lines that consist entirely of the expected value are omitted, so the
output can be quite compact depending on the circumstances.
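
A hypothetical fragment of the output (illustrative only, with shortened
lines, not taken from a real run) looks like:

  0x00000000 : 111111111111111111111111........111111111111
  0x00000400 : ..............................11............

Each line starts with the hex index of the first page on that line,
followed by one character per page ('1' for a set bit, '.' for a clear one).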

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c   | 40 +++-
 include/migration/migration.h |  1 +
 2 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch_init.c b/arch_init.c
index 3a21f0e..2b0cd18 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -833,13 +833,51 @@ static void reset_ram_globals(void)
 
 #define MAX_WAIT 50 /* ms, half buffered_file limit */
 
-
 /* Each of ram_save_setup, ram_save_iterate and ram_save_complete has
  * long-running RCU critical section.  When rcu-reclaims in the code
  * start to become numerous it will be necessary to reduce the
  * granularity of these critical sections.
  */
 
+/*
+ * 'expected' is the value you expect the bitmap mostly to be full
+ * of and it won't bother printing lines that are all this value
+ * if 'todump' is null the migration bitmap is dumped.
+ */
+void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
+{
+int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
+
+int64_t cur;
+int64_t linelen = 128;
+char linebuf[129];
+
+if (!todump) {
+todump = migration_bitmap;
+}
+
+for (cur = 0; cur < ram_pages; cur += linelen) {
+int64_t curb;
+bool found = false;
+/*
+ * Last line; catch the case where the line length
+ * is longer than remaining ram
+ */
+if (cur+linelen > ram_pages) {
+linelen = ram_pages - cur;
+}
+for (curb = 0; curb < linelen; curb++) {
+bool thisbit = test_bit(cur+curb, todump);
+linebuf[curb] = thisbit ? '1' : '.';
+found = found || (thisbit != expected);
+}
+if (found) {
+linebuf[curb] = '\0';
+fprintf(stderr,  "0x%08" PRIx64 " : %s\n", cur, linebuf);
+}
+}
+}
+
 static int ram_save_setup(QEMUFile *f, void *opaque)
 {
 RAMBlock *block;
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 0719d82..fb7551d 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -152,6 +152,7 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
 double xbzrle_mig_cache_miss_rate(void);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
+void ram_debug_dump_bitmap(unsigned long *todump, bool expected);
 
 /**
  * @migrate_add_blocker - prevent migration from proceeding
-- 
2.1.0




[Qemu-devel] [PATCH v6 23/47] migrate_init: Call from savevm

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Suspend to file is very much like a migrate, and it makes life
easier if we have the Migration state available, so initialise it
in the savevm.c code for suspending.

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 include/migration/migration.h | 3 +--
 include/qemu/typedefs.h   | 1 +
 migration/migration.c | 2 +-
 savevm.c  | 2 ++
 4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index e3389dc..1b9a535 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -48,8 +48,6 @@ enum mig_rp_message_type {
 MIG_RP_MSG_PONG, /* Response to a PING; data (seq: be32 ) */
 };
 
-typedef struct MigrationState MigrationState;
-
 typedef QLIST_HEAD(, LoadStateEntry) LoadStateEntry_Head;
 
 typedef enum {
@@ -148,6 +146,7 @@ int migrate_fd_close(MigrationState *s);
 
 void add_migration_state_change_notifier(Notifier *notify);
 void remove_migration_state_change_notifier(Notifier *notify);
+MigrationState *migrate_init(const MigrationParams *params);
 bool migration_in_setup(MigrationState *);
 bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index 6fdcbcd..611db46 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -41,6 +41,7 @@ typedef struct MemoryRegion MemoryRegion;
 typedef struct MemoryRegionSection MemoryRegionSection;
 typedef struct MigrationIncomingState MigrationIncomingState;
 typedef struct MigrationParams MigrationParams;
+typedef struct MigrationState MigrationState;
 typedef struct Monitor Monitor;
 typedef struct MouseTransformInfo MouseTransformInfo;
 typedef struct MSIMessage MSIMessage;
diff --git a/migration/migration.c b/migration/migration.c
index b72a4c7..45284b2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -500,7 +500,7 @@ bool migration_has_failed(MigrationState *s)
 s->state == MIGRATION_STATUS_FAILED);
 }
 
-static MigrationState *migrate_init(const MigrationParams *params)
+MigrationState *migrate_init(const MigrationParams *params)
 {
 MigrationState *s = migrate_get_current();
 int64_t bandwidth_limit = s->bandwidth_limit;
diff --git a/savevm.c b/savevm.c
index 1e940af..c281d1b 100644
--- a/savevm.c
+++ b/savevm.c
@@ -988,6 +988,8 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
 .blk = 0,
 .shared = 0
 };
+MigrationState *ms = migrate_init(&params);
+ms->file = f;
 
 if (qemu_savevm_state_blocked(errp)) {
 return -EINVAL;
-- 
2.1.0




[Qemu-devel] [PATCH v6 27/47] MIGRATION_STATUS_POSTCOPY_ACTIVE: Add new migration state

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

'MIGRATION_STATUS_POSTCOPY_ACTIVE' is entered after migrate_start_postcopy

'migration_postcopy_phase' is provided for other sections to know if
they're in postcopy.
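
A hypothetical caller (purely illustrative; no such user is added by this
patch) could check the phase like this:

    if (migration_postcopy_phase(migrate_get_current())) {
        /* postcopy has started: leave postcopiable state for later */
    } else {
        /* still in precopy */
    }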

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 include/migration/migration.h |  2 ++
 migration/migration.c | 56 ---
 qapi-schema.json  |  4 +++-
 trace-events  |  1 +
 4 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 4db9393..b9d028c 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -153,6 +153,8 @@ MigrationState *migrate_init(const MigrationParams *params);
 bool migration_in_setup(MigrationState *);
 bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
+/* True if outgoing migration has entered postcopy phase */
+bool migration_postcopy_phase(MigrationState *);
 MigrationState *migrate_get_current(void);
 
 uint64_t ram_bytes_remaining(void);
diff --git a/migration/migration.c b/migration/migration.c
index 17da8ab..d69e102 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -255,6 +255,7 @@ static bool migration_already_active(MigrationState *ms)
 {
 switch (ms->state) {
 case MIGRATION_STATUS_ACTIVE:
+case MIGRATION_STATUS_POSTCOPY_ACTIVE:
 case MIGRATION_STATUS_SETUP:
 return true;
 
@@ -325,6 +326,39 @@ MigrationInfo *qmp_query_migrate(Error **errp)
 
 get_xbzrle_cache_stats(info);
 break;
+case MIGRATION_STATUS_POSTCOPY_ACTIVE:
+/* Mostly the same as active; TODO add some postcopy stats */
+info->has_status = true;
+info->has_total_time = true;
+info->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME)
+- s->total_time;
+info->has_expected_downtime = true;
+info->expected_downtime = s->expected_downtime;
+info->has_setup_time = true;
+info->setup_time = s->setup_time;
+
+info->has_ram = true;
+info->ram = g_malloc0(sizeof(*info->ram));
+info->ram->transferred = ram_bytes_transferred();
+info->ram->remaining = ram_bytes_remaining();
+info->ram->total = ram_bytes_total();
+info->ram->duplicate = dup_mig_pages_transferred();
+info->ram->skipped = skipped_mig_pages_transferred();
+info->ram->normal = norm_mig_pages_transferred();
+info->ram->normal_bytes = norm_mig_bytes_transferred();
+info->ram->dirty_pages_rate = s->dirty_pages_rate;
+info->ram->mbps = s->mbps;
+
+if (blk_mig_active()) {
+info->has_disk = true;
+info->disk = g_malloc0(sizeof(*info->disk));
+info->disk->transferred = blk_mig_bytes_transferred();
+info->disk->remaining = blk_mig_bytes_remaining();
+info->disk->total = blk_mig_bytes_total();
+}
+
+get_xbzrle_cache_stats(info);
+break;
 case MIGRATION_STATUS_COMPLETED:
 get_xbzrle_cache_stats(info);
 
@@ -366,8 +400,7 @@ void qmp_migrate_set_capabilities(MigrationCapabilityStatusList *params,
 MigrationState *s = migrate_get_current();
 MigrationCapabilityStatusList *cap;
 
-if (s->state == MIGRATION_STATUS_ACTIVE ||
-s->state == MIGRATION_STATUS_SETUP) {
+if (migration_already_active(s)) {
 error_set(errp, QERR_MIGRATION_ACTIVE);
 return;
 }
@@ -442,7 +475,8 @@ static void migrate_fd_cleanup(void *opaque)
 s->file = NULL;
 }
 
-assert(s->state != MIGRATION_STATUS_ACTIVE);
+assert((s->state != MIGRATION_STATUS_ACTIVE) &&
+   (s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE));
 
 if (s->state != MIGRATION_STATUS_COMPLETED) {
 qemu_savevm_state_cancel();
@@ -477,8 +511,7 @@ static void migrate_fd_cancel(MigrationState *s)
 
 do {
 old_state = s->state;
-if (old_state != MIGRATION_STATUS_SETUP &&
-old_state != MIGRATION_STATUS_ACTIVE) {
+if (!migration_already_active(s)) {
 break;
 }
 migrate_set_state(s, old_state, MIGRATION_STATUS_CANCELLING);
@@ -522,6 +555,11 @@ bool migration_has_failed(MigrationState *s)
 s->state == MIGRATION_STATUS_FAILED);
 }
 
+bool migration_postcopy_phase(MigrationState *s)
+{
+return (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
+}
+
 MigrationState *migrate_init(const MigrationParams *params)
 {
 MigrationState *s = migrate_get_current();
@@ -593,8 +631,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
 params.blk = has_blk && blk;
 params.shared = has_inc && inc;
 
-if (s->state == MIGRATION_STATUS_ACTIVE ||
-s->state == MIGRATION_STATUS_SETUP ||
+if (migration_already_active(s) ||
 s->state == MIGRATION_STATUS_CANCELLING) {
 error_set(errp, QERR_MIGRATION_ACTIVE);
  

[Qemu-devel] [PATCH v6 12/47] Return path: socket_writev_buffer: Block even on non-blocking fd's

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The destination sets the fd to non-blocking on incoming migrations;
this also affects the return path from the destination, and thus we
need to make sure we can safely write to the return path.

Signed-off-by: Dr. David Alan Gilbert 
---
 migration/qemu-file-unix.c | 41 -
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/migration/qemu-file-unix.c b/migration/qemu-file-unix.c
index 1e7de7b..6b024e5 100644
--- a/migration/qemu-file-unix.c
+++ b/migration/qemu-file-unix.c
@@ -39,12 +39,43 @@ static ssize_t socket_writev_buffer(void *opaque, struct iovec *iov, int iovcnt,
 QEMUFileSocket *s = opaque;
 ssize_t len;
 ssize_t size = iov_size(iov, iovcnt);
+ssize_t offset = 0;
+int err;
 
-len = iov_send(s->fd, iov, iovcnt, 0, size);
-if (len < size) {
-len = -socket_error();
-}
-return len;
+while (size > 0) {
+len = iov_send(s->fd, iov, iovcnt, offset, size);
+
+if (len > 0) {
+size -= len;
+offset += len;
+}
+
+if (size > 0) {
+err = socket_error();
+
+if (err != EAGAIN && err != EWOULDBLOCK) {
+error_report("socket_writev_buffer: Got err=%d for (%zd/%zd)",
+ err, size, len);
+/*
+ * If I've already sent some but only just got the error, I
+ * could return the amount validly sent so far and wait for the
+ * next call to report the error, but I'd rather flag the error
+ * immediately.
+ */
+return -err;
+}
+
+/* Emulate blocking */
+GPollFD pfd;
+
+pfd.fd = s->fd;
+pfd.events = G_IO_OUT | G_IO_ERR;
+pfd.revents = 0;
+g_poll(&pfd, 1 /* 1 fd */, -1 /* no timeout */);
+}
+ }
+
+return offset;
 }
 
 static int socket_get_fd(void *opaque)
-- 
2.1.0




[Qemu-devel] [PATCH v6 20/47] Add migration-capability boolean for postcopy-ram.

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The 'postcopy ram' capability allows postcopy migration of RAM;
note that the migration starts off in precopy mode until
postcopy mode is triggered (see the migrate_start_postcopy
patch later in the series).
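
For illustration (not part of the patch), the capability would be switched
on from the source monitor before starting the migration:

  (qemu) migrate_set_capability x-postcopy-ram on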

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: Eric Blake 
---
 include/migration/migration.h | 1 +
 migration/migration.c | 9 +
 qapi-schema.json  | 7 ++-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index ae85958..5858788 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -179,6 +179,7 @@ void migrate_add_blocker(Error *reason);
  */
 void migrate_del_blocker(Error *reason);
 
+bool migrate_postcopy_ram(void);
 bool migrate_zero_blocks(void);
 
 bool migrate_auto_converge(void);
diff --git a/migration/migration.c b/migration/migration.c
index 01ed1d0..f641fc7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -685,6 +685,15 @@ void qmp_migrate_set_downtime(double value, Error **errp)
 max_downtime = (uint64_t)value;
 }
 
+bool migrate_postcopy_ram(void)
+{
+MigrationState *s;
+
+s = migrate_get_current();
+
+return s->enabled_capabilities[MIGRATION_CAPABILITY_X_POSTCOPY_RAM];
+}
+
 bool migrate_auto_converge(void)
 {
 MigrationState *s;
diff --git a/qapi-schema.json b/qapi-schema.json
index ac9594d..dcd3e62 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -518,10 +518,15 @@
 # @auto-converge: If enabled, QEMU will automatically throttle down the guest
 #  to speed up convergence of RAM migration. (since 1.6)
 #
+# @x-postcopy-ram: Start executing on the migration target before all of RAM has
+#  been migrated, pulling the remaining pages along as needed. NOTE: If
+#  the migration fails during postcopy the VM will fail.  (since 2.4)
+#
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
-  'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks'] }
+  'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
+   'x-postcopy-ram'] }
 
 ##
 # @MigrationCapabilityStatus
-- 
2.1.0




[Qemu-devel] [PATCH v6 19/47] Rework loadvm path for subloops

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Postcopy needs to have two migration streams loading concurrently;
one from memory (with the device state) and the other from the fd
with the memory transactions.

Split the core of qemu_loadvm_state out so we can use it for both.

Allow the inner loadvm loop to quit and cause the parent loops to
exit as well.
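
As a sketch of the resulting call structure (the packaged-stream leg is an
assumption here, since it only arrives with the MIG_CMD_PACKAGED patch,
22/47, in this series):

  qemu_loadvm_state(f)                              main stream
    -> qemu_loadvm_state_main(f, mis)
         -> loadvm_process_command(f)
              -> qemu_loadvm_state_main(packaged_file, mis)   embedded stream

A handler returning LOADVM_QUIT unwinds all of these loops.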

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |   6 ++
 migration/migration.c |   2 +
 savevm.c  | 125 +++---
 trace-events  |   4 ++
 4 files changed, 81 insertions(+), 56 deletions(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 92a6068..ae85958 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -59,6 +59,12 @@ struct MigrationIncomingState {
 /* See savevm.c */
 LoadStateEntry_Head loadvm_handlers;
 
+/*
+ * Free at the start of the main state load, set as the main thread finishes
+ * loading state.
+ */
+QemuEvent  main_thread_load_event;
+
 QEMUFile *return_path;
 QemuMutex  rp_mutex;/* We send replies from multiple threads */
 };
diff --git a/migration/migration.c b/migration/migration.c
index bcad9a4..01ed1d0 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -72,12 +72,14 @@ MigrationIncomingState *migration_incoming_state_new(QEMUFile* f)
 mis_current->file = f;
 QLIST_INIT(&mis_current->loadvm_handlers);
 qemu_mutex_init(&mis_current->rp_mutex);
+qemu_event_init(&mis_current->main_thread_load_event, false);
 
 return mis_current;
 }
 
 void migration_incoming_state_destroy(void)
 {
+qemu_event_destroy(&mis_current->main_thread_load_event);
 loadvm_free_handlers(mis_current);
 g_free(mis_current);
 mis_current = NULL;
diff --git a/savevm.c b/savevm.c
index ef174d7..e7d42dc 100644
--- a/savevm.c
+++ b/savevm.c
@@ -959,6 +959,13 @@ static SaveStateEntry *find_se(const char *idstr, int instance_id)
 return NULL;
 }
 
+enum LoadVMExitCodes {
+/* Allow a command to quit all layers of nested loadvm loops */
+LOADVM_QUIT =  1,
+};
+
+static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis);
+
 static int loadvm_process_command_simple_lencheck(const char *name,
   unsigned int actual,
   unsigned int expected)
@@ -974,7 +981,9 @@ static int loadvm_process_command_simple_lencheck(const char *name,
 
 /*
  * Process an incoming 'QEMU_VM_COMMAND'
- * negative return on error (will issue error message)
+ * 0   just a normal return
+ * LOADVM_QUIT All good, but exit the loop
+ * <0  Error
  */
 static int loadvm_process_command(QEMUFile *f)
 {
@@ -1044,36 +1053,12 @@ void loadvm_free_handlers(MigrationIncomingState *mis)
 }
 }
 
-int qemu_loadvm_state(QEMUFile *f)
+static int qemu_loadvm_state_main(QEMUFile *f, MigrationIncomingState *mis)
 {
-MigrationIncomingState *mis = migration_incoming_get_current();
-Error *local_err = NULL;
 uint8_t section_type;
-unsigned int v;
 int ret;
-int file_error_after_eof = -1;
-
-if (qemu_savevm_state_blocked(&local_err)) {
-error_report_err(local_err);
-return -EINVAL;
-}
-
-v = qemu_get_be32(f);
-if (v != QEMU_VM_FILE_MAGIC) {
-error_report("Not a migration stream");
-return -EINVAL;
-}
-
-v = qemu_get_be32(f);
-if (v == QEMU_VM_FILE_VERSION_COMPAT) {
-error_report("SaveVM v2 format is obsolete and don't work anymore");
-return -ENOTSUP;
-}
-if (v != QEMU_VM_FILE_VERSION) {
-error_report("Unsupported migration stream version");
-return -ENOTSUP;
-}
 
+trace_qemu_loadvm_state_main();
 while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
 uint32_t instance_id, version_id, section_id;
 SaveStateEntry *se;
@@ -1101,16 +1086,14 @@ int qemu_loadvm_state(QEMUFile *f)
 if (se == NULL) {
 error_report("Unknown savevm section or instance '%s' %d",
  idstr, instance_id);
-ret = -EINVAL;
-goto out;
+return -EINVAL;
 }
 
 /* Validate version */
 if (version_id > se->version_id) {
 error_report("savevm: unsupported version %d for '%s' v%d",
  version_id, idstr, se->version_id);
-ret = -EINVAL;
-goto out;
+return -EINVAL;
 }
 
 /* Add entry */
@@ -1125,7 +1108,7 @@ int qemu_loadvm_state(QEMUFile *f)
 if (ret < 0) {
 error_report("error while loading state for instance 0x%x of"
  " device '%s'", instance_id, idstr);
-goto out;
+return 

[Qemu-devel] [PATCH v6 16/47] Return path: Source handling of return path

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Open a return path, and handle messages that are received upon it.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h |   8 ++
 migration/migration.c | 177 +-
 trace-events  |  12 +++
 3 files changed, 196 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 6300ec1..0719d82 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -73,6 +73,14 @@ struct MigrationState
 
 int state;
 MigrationParams params;
+
+/* State related to return path */
+struct {
+QEMUFile *file;
+QemuThreadrp_thread;
+bool  error;
+} rp_state;
+
 double mbps;
 int64_t total_time;
 int64_t downtime;
diff --git a/migration/migration.c b/migration/migration.c
index db9471d..88355e2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -243,6 +243,23 @@ MigrationCapabilityStatusList *qmp_query_migrate_capabilities(Error **errp)
 return head;
 }
 
+/*
+ * Return true if we're already in the middle of a migration
+ * (i.e. any of the active or setup states)
+ */
+static bool migration_already_active(MigrationState *ms)
+{
+switch (ms->state) {
+case MIGRATION_STATUS_ACTIVE:
+case MIGRATION_STATUS_SETUP:
+return true;
+
+default:
+return false;
+
+}
+}
+
 static void get_xbzrle_cache_stats(MigrationInfo *info)
 {
 if (migrate_use_xbzrle()) {
@@ -365,6 +382,21 @@ static void migrate_set_state(MigrationState *s, int old_state, int new_state)
 }
 }
 
+static void migrate_fd_cleanup_src_rp(MigrationState *ms)
+{
+QEMUFile *rp = ms->rp_state.file;
+
+/*
+ * When stuff goes wrong (e.g. failing destination) on the rp, it can get
+ * cleaned up from a few threads; make sure not to do it twice in parallel
+ */
+rp = atomic_cmpxchg(&ms->rp_state.file, rp, NULL);
+if (rp) {
+trace_migrate_fd_cleanup_src_rp();
+qemu_fclose(rp);
+}
+}
+
 static void migrate_fd_cleanup(void *opaque)
 {
 MigrationState *s = opaque;
@@ -372,6 +404,8 @@ static void migrate_fd_cleanup(void *opaque)
 qemu_bh_delete(s->cleanup_bh);
 s->cleanup_bh = NULL;
 
+migrate_fd_cleanup_src_rp(s);
+
 if (s->file) {
 trace_migrate_fd_cleanup();
 qemu_mutex_unlock_iothread();
@@ -410,6 +444,11 @@ static void migrate_fd_cancel(MigrationState *s)
 QEMUFile *f = migrate_get_current()->file;
 trace_migrate_fd_cancel();
 
+if (s->rp_state.file) {
+/* shutdown the rp socket, so causing the rp thread to shutdown */
+qemu_file_shutdown(s->rp_state.file);
+}
+
 do {
 old_state = s->state;
 if (old_state != MIGRATION_STATUS_SETUP &&
@@ -678,8 +717,144 @@ int64_t migrate_xbzrle_cache_size(void)
 return s->xbzrle_cache_size;
 }
 
-/* migration thread support */
+/*
+ * Something bad happened to the RP stream, mark an error
+ * The caller shall print something to indicate why
+ */
+static void source_return_path_bad(MigrationState *s)
+{
+s->rp_state.error = true;
+migrate_fd_cleanup_src_rp(s);
+}
+
+/*
+ * Handles messages sent on the return path towards the source VM
+ *
+ */
+static void *source_return_path_thread(void *opaque)
+{
+MigrationState *ms = opaque;
+QEMUFile *rp = ms->rp_state.file;
+uint16_t expected_len, header_len, header_type;
+const int max_len = 512;
+uint8_t buf[max_len];
+uint32_t tmp32;
+int res;
+
+trace_source_return_path_thread_entry();
+while (rp && !qemu_file_get_error(rp) &&
+migration_already_active(ms)) {
+trace_source_return_path_thread_loop_top();
+header_type = qemu_get_be16(rp);
+header_len = qemu_get_be16(rp);
+
+switch (header_type) {
+case MIG_RP_MSG_SHUT:
+case MIG_RP_MSG_PONG:
+expected_len = 4;
+break;
+
+default:
+error_report("RP: Received invalid message 0x%04x length 0x%04x",
+header_type, header_len);
+source_return_path_bad(ms);
+goto out;
+}
+
+if (header_len > expected_len) {
+error_report("RP: Received message 0x%04x with"
+"incorrect length %d expecting %d",
+header_type, header_len,
+expected_len);
+source_return_path_bad(ms);
+goto out;
+}
+
+/* We know we've got a valid header by this point */
+res = qemu_get_buffer(rp, buf, header_len);
+if (res != header_len) {
+trace_source_return_path_thread_failed_read_cmd_data();
+source_return_path_bad(ms);
+goto out;
+}
+
+/* OK, we have the message and the data */
+switch (header_type) {
+case MIG_RP_MSG_SHUT:
+tmp32 = be32_to_cpup

[Qemu-devel] [PATCH v6 10/47] Rename save_live_complete to save_live_complete_precopy

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

In postcopy we're going to need to perform the complete phase
for postcopiable devices at a different point; start out by
renaming all of the 'complete's to make the difference obvious.

Signed-off-by: Dr. David Alan Gilbert 
---
 arch_init.c |  2 +-
 hw/ppc/spapr.c  |  2 +-
 include/migration/vmstate.h |  2 +-
 include/sysemu/sysemu.h |  2 +-
 migration/block.c   |  2 +-
 migration/migration.c   |  2 +-
 savevm.c| 10 +-
 trace-events|  2 +-
 8 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 06722bb..3a21f0e 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1233,7 +1233,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 static SaveVMHandlers savevm_ram_handlers = {
 .save_live_setup = ram_save_setup,
 .save_live_iterate = ram_save_iterate,
-.save_live_complete = ram_save_complete,
+.save_live_complete_precopy = ram_save_complete,
 .save_live_pending = ram_save_pending,
 .load_state = ram_load,
 .cancel = ram_migration_cancel,
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 61ddc79..20a1187 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1373,7 +1373,7 @@ static int htab_load(QEMUFile *f, void *opaque, int version_id)
 static SaveVMHandlers savevm_htab_handlers = {
 .save_live_setup = htab_save_setup,
 .save_live_iterate = htab_save_iterate,
-.save_live_complete = htab_save_complete,
+.save_live_complete_precopy = htab_save_complete,
 .load_state = htab_load,
 };
 
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index bc7616a..55cd174 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -40,7 +40,7 @@ typedef struct SaveVMHandlers {
 SaveStateHandler *save_state;
 
 void (*cancel)(void *opaque);
-int (*save_live_complete)(QEMUFile *f, void *opaque);
+int (*save_live_complete_precopy)(QEMUFile *f, void *opaque);
 
 /* This runs both outside and inside the iothread lock.  */
 bool (*is_active)(void *opaque);
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index bd67f86..8402e6e 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -87,7 +87,7 @@ void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params);
 void qemu_savevm_state_header(QEMUFile *f);
 int qemu_savevm_state_iterate(QEMUFile *f);
-void qemu_savevm_state_complete(QEMUFile *f);
+void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
 uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
 int qemu_loadvm_state(QEMUFile *f);
diff --git a/migration/block.c b/migration/block.c
index 085c0fa..00f4998 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -878,7 +878,7 @@ static SaveVMHandlers savevm_block_handlers = {
 .set_params = block_set_params,
 .save_live_setup = block_save_setup,
 .save_live_iterate = block_save_iterate,
-.save_live_complete = block_save_complete,
+.save_live_complete_precopy = block_save_complete,
 .save_live_pending = block_save_pending,
 .load_state = block_load,
 .cancel = block_migration_cancel,
diff --git a/migration/migration.c b/migration/migration.c
index ce488cf..872d1e1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -671,7 +671,7 @@ static void *migration_thread(void *opaque)
 ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
 if (ret >= 0) {
 qemu_file_set_rate_limit(s->file, INT64_MAX);
-qemu_savevm_state_complete(s->file);
+qemu_savevm_state_complete_precopy(s->file);
 }
 qemu_mutex_unlock_iothread();
 
diff --git a/savevm.c b/savevm.c
index 81f6a29..eba9174 100644
--- a/savevm.c
+++ b/savevm.c
@@ -720,19 +720,19 @@ static bool should_send_vmdesc(void)
 return !machine->suppress_vmdesc;
 }
 
-void qemu_savevm_state_complete(QEMUFile *f)
+void qemu_savevm_state_complete_precopy(QEMUFile *f)
 {
 QJSON *vmdesc;
 int vmdesc_len;
 SaveStateEntry *se;
 int ret;
 
-trace_savevm_state_complete();
+trace_savevm_state_complete_precopy();
 
 cpu_synchronize_all_states();
 
 QTAILQ_FOREACH(se, &savevm_handlers, entry) {
-if (!se->ops || !se->ops->save_live_complete) {
+if (!se->ops || !se->ops->save_live_complete_precopy) {
 continue;
 }
 if (se->ops && se->ops->is_active) {
@@ -745,7 +745,7 @@ void qemu_savevm_state_complete(QEMUFile *f)
 qemu_put_byte(f, QEMU_VM_SECTION_END);
 qemu_put_be32(f, se->section_id);
 
-ret = se->ops->save_live_complete(f, se->opaque);
+ret = se->ops->save_live_complete_precopy(f, se->opaque);
 trace_savevm_section_end(se->idstr, se->section_id, ret);
 if (ret < 0)

[Qemu-devel] [PATCH v6 15/47] Return path: Send responses from destination to source

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Add migrate_send_rp_message to send a message from destination to source
along the return path.
  (It uses a mutex so that it can be called from multiple threads.)
Add migrate_send_rp_shut to send a 'shut' message to indicate
  the destination is finished with the RP.
Add migrate_send_rp_pong to send a 'PONG' message in response to a PING;
  use it in the MIG_CMD_PING handler.
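
As a sketch, each return-path message is framed as follows (this just
restates the migrate_send_rp_message body below):

  be16  message type (enum mig_rp_message_type)
  be16  payload length
  len   payload bytes (a be32 value for both SHUT and PONG)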

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/migration.h | 17 
 migration/migration.c | 45 +++
 savevm.c  |  2 +-
 trace-events  |  1 +
 4 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index e2e251d..6300ec1 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -41,6 +41,13 @@ struct MigrationParams {
 bool shared;
 };
 
+/* Messages sent on the return path from destination to source */
+enum mig_rp_message_type {
+MIG_RP_MSG_INVALID = 0,  /* Must be 0 */
+MIG_RP_MSG_SHUT, /* sibling will not send any more RP messages */
+MIG_RP_MSG_PONG, /* Response to a PING; data (seq: be32 ) */
+};
+
 typedef struct MigrationState MigrationState;
 
 /* State for the incoming migration */
@@ -48,6 +55,7 @@ struct MigrationIncomingState {
 QEMUFile *file;
 
 QEMUFile *return_path;
+QemuMutex  rp_mutex;/* We send replies from multiple threads */
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
@@ -164,6 +172,15 @@ int64_t migrate_xbzrle_cache_size(void);
 
 int64_t xbzrle_cache_resize(int64_t new_size);
 
+/* Sending on the return path - generic and then for each message type */
+void migrate_send_rp_message(MigrationIncomingState *mis,
+ enum mig_rp_message_type message_type,
+ uint16_t len, void *data);
+void migrate_send_rp_shut(MigrationIncomingState *mis,
+  uint32_t value);
+void migrate_send_rp_pong(MigrationIncomingState *mis,
+  uint32_t value);
+
 void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
 void ram_control_load_hook(QEMUFile *f, uint64_t flags);
diff --git a/migration/migration.c b/migration/migration.c
index 872d1e1..db9471d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -70,6 +70,7 @@ MigrationIncomingState *migration_incoming_state_new(QEMUFile* f)
 {
 mis_current = g_malloc0(sizeof(MigrationIncomingState));
 mis_current->file = f;
+qemu_mutex_init(&mis_current->rp_mutex);
 
 return mis_current;
 }
@@ -162,6 +163,50 @@ void process_incoming_migration(QEMUFile *f)
 qemu_coroutine_enter(co, f);
 }
 
+/*
+ * Send a message on the return channel back to the source
+ * of the migration.
+ */
+void migrate_send_rp_message(MigrationIncomingState *mis,
+ enum mig_rp_message_type message_type,
+ uint16_t len, void *data)
+{
+trace_migrate_send_rp_message((int)message_type, len);
+qemu_mutex_lock(&mis->rp_mutex);
+qemu_put_be16(mis->return_path, (unsigned int)message_type);
+qemu_put_be16(mis->return_path, len);
+qemu_put_buffer(mis->return_path, data, len);
+qemu_fflush(mis->return_path);
+qemu_mutex_unlock(&mis->rp_mutex);
+}
+
+/*
+ * Send a 'SHUT' message on the return channel with the given value
+ * to indicate that we've finished with the RP.  Non-0 value indicates
+ * error.
+ */
+void migrate_send_rp_shut(MigrationIncomingState *mis,
+  uint32_t value)
+{
+uint32_t buf;
+
+buf = cpu_to_be32(value);
+migrate_send_rp_message(mis, MIG_RP_MSG_SHUT, sizeof(buf), &buf);
+}
+
+/*
+ * Send a 'PONG' message on the return channel with the given value
+ * (normally in response to a 'PING')
+ */
+void migrate_send_rp_pong(MigrationIncomingState *mis,
+  uint32_t value)
+{
+uint32_t buf;
+
+buf = cpu_to_be32(value);
+migrate_send_rp_message(mis, MIG_RP_MSG_PONG, sizeof(buf), &buf);
+}
+
 /* amount of nanoseconds we are willing to wait for migration to be down.
  * the choice of nanoseconds is because it is the maximum resolution that
  * get_clock() can achieve. It is an internal measure. All user-visible
diff --git a/savevm.c b/savevm.c
index 4dc8f06..f6b8b90 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1016,7 +1016,7 @@ static int loadvm_process_command(QEMUFile *f)
  tmp32);
 return -1;
 }
-/* migrate_send_rp_pong(mis, tmp32); TODO: gets added later */
+migrate_send_rp_pong(mis, tmp32);
 break;
 
 default:
diff --git a/trace-events b/trace-events
index 0f74836..9f0a071 100644
--- a/trace-events
+++ b/trace-events
@@ -1383,6 +1383,7 @@ migrate_fd_cleanup(void) ""
 migrate_fd_error(void) ""
 mi

[Qemu-devel] [PATCH v6 14/47] Return path: Control commands

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Add two src->dest commands:
   * OPEN_RETURN_PATH - To request that the destination open the return path
   * PING - Request an acknowledge from the destination

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
 include/migration/migration.h |  2 ++
 include/sysemu/sysemu.h   |  6 -
 savevm.c  | 59 +++
 trace-events  |  2 ++
 4 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index f221c99..e2e251d 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -46,6 +46,8 @@ typedef struct MigrationState MigrationState;
 /* State for the incoming migration */
 struct MigrationIncomingState {
 QEMUFile *file;
+
+QEMUFile *return_path;
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index e82b205..49ba134 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -84,7 +84,9 @@ void qemu_announce_self(void);
 
 /* Subcommands for QEMU_VM_COMMAND */
 enum qemu_vm_cmd {
-MIG_CMD_INVALID = 0,   /* Must be 0 */
+MIG_CMD_INVALID = 0,   /* Must be 0 */
+MIG_CMD_OPEN_RETURN_PATH,  /* Tell the dest to open the Return path */
+MIG_CMD_PING,  /* Request a PONG on the RP */
 };
 
 bool qemu_savevm_state_blocked(Error **errp);
@@ -97,6 +99,8 @@ void qemu_savevm_state_cancel(void);
 uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
 void qemu_savevm_command_send(QEMUFile *f, enum qemu_vm_cmd command,
   uint16_t len, uint8_t *data);
+void qemu_savevm_send_ping(QEMUFile *f, uint32_t value);
+void qemu_savevm_send_open_return_path(QEMUFile *f);
 int qemu_loadvm_state(QEMUFile *f);
 
 typedef enum DisplayType
diff --git a/savevm.c b/savevm.c
index 2dc5fbb..4dc8f06 100644
--- a/savevm.c
+++ b/savevm.c
@@ -620,6 +620,20 @@ void qemu_savevm_command_send(QEMUFile *f,
 qemu_fflush(f);
 }
 
+void qemu_savevm_send_ping(QEMUFile *f, uint32_t value)
+{
+uint32_t buf;
+
+trace_savevm_send_ping(value);
+buf = cpu_to_be32(value);
+qemu_savevm_command_send(f, MIG_CMD_PING, 4, (uint8_t *)&buf);
+}
+
+void qemu_savevm_send_open_return_path(QEMUFile *f)
+{
+qemu_savevm_command_send(f, MIG_CMD_OPEN_RETURN_PATH, 0, NULL);
+}
+
 bool qemu_savevm_state_blocked(Error **errp)
 {
 SaveStateEntry *se;
@@ -945,20 +959,65 @@ static SaveStateEntry *find_se(const char *idstr, int instance_id)
 return NULL;
 }
 
+static int loadvm_process_command_simple_lencheck(const char *name,
+  unsigned int actual,
+  unsigned int expected)
+{
+if (actual != expected) {
+error_report("%s received with bad length - expecting %d, got %d",
+ name, expected, actual);
+return -1;
+}
+
+return 0;
+}
+
 /*
  * Process an incoming 'QEMU_VM_COMMAND'
  * negative return on error (will issue error message)
  */
 static int loadvm_process_command(QEMUFile *f)
 {
+MigrationIncomingState *mis = migration_incoming_get_current();
 uint16_t com;
 uint16_t len;
+uint32_t tmp32;
 
 com = qemu_get_be16(f);
 len = qemu_get_be16(f);
 
 trace_loadvm_process_command(com, len);
 switch (com) {
+case MIG_CMD_OPEN_RETURN_PATH:
+if (loadvm_process_command_simple_lencheck("CMD_OPEN_RETURN_PATH",
+   len, 0)) {
+return -1;
+}
+if (mis->return_path) {
+error_report("CMD_OPEN_RETURN_PATH called when RP already open");
+/* Not really a problem, so don't give up */
+return 0;
+}
+mis->return_path = qemu_file_get_return_path(f);
+if (!mis->return_path) {
+error_report("CMD_OPEN_RETURN_PATH failed");
+return -1;
+}
+break;
+
+case MIG_CMD_PING:
+if (loadvm_process_command_simple_lencheck("CMD_PING", len, 4)) {
+return -1;
+}
+tmp32 = qemu_get_be32(f);
+trace_loadvm_process_command_ping(tmp32);
+if (!mis->return_path) {
+error_report("CMD_PING (0x%x) received with no return path",
+ tmp32);
+return -1;
+}
+/* migrate_send_rp_pong(mis, tmp32); TODO: gets added later */
+break;
 
 default:
 error_report("VM_COMMAND 0x%x unknown (len 0x%x)", com, len);
diff --git a/trace-events b/trace-events
index 4d093dc..0f74836 100644
--- a/trace-events
+++ b/trace-events
@@ -1172,8 +1172,10 @@ qemu_loadvm_state_section(unsigned int section_type) "%d"
 qemu_loadvm_state_section_partend(uint32_t section_id) "%u"
 qemu_loadvm_state_section_startfull(uint32_t section_id, const char *idstr

[Qemu-devel] [PATCH v6 08/47] Add qemu_get_buffer_less_copy to avoid copies some of the time

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

qemu_get_buffer always copies the data it reads to a user's buffer,
however in many cases the file buffer inside qemu_file could be given
back to the caller, avoiding the copy.  This isn't always possible
depending on the size and alignment of the data.

Thus 'qemu_get_buffer_less_copy' either copies the data to a supplied
buffer or updates a pointer to the internal buffer if convenient.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/migration/qemu-file.h |  2 ++
 migration/qemu-file.c | 45 +++
 2 files changed, 47 insertions(+)

diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index 3fe545e..4cac58f 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -159,6 +159,8 @@ void qemu_put_be32(QEMUFile *f, unsigned int v);
 void qemu_put_be64(QEMUFile *f, uint64_t v);
 int qemu_peek_buffer(QEMUFile *f, uint8_t **buf, int size, size_t offset);
 int qemu_get_buffer(QEMUFile *f, uint8_t *buf, int size);
+int qemu_get_buffer_less_copy(QEMUFile *f, uint8_t **buf, int size);
+
 /*
  * Note that you can only peek continuous bytes from where the current pointer
  * is; you aren't guaranteed to be able to peak to +n bytes unless you've
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 8dc5767..ec3a598 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -426,6 +426,51 @@ int qemu_get_buffer(QEMUFile *f, uint8_t *buf, int size)
 }
 
 /*
+ * Read 'size' bytes of data from the file.
+ * 'size' can be larger than the internal buffer.
+ *
+ * The data:
+ *   may be held on an internal buffer (in which case *buf is updated
+ * to point to it) that is valid until the next qemu_file operation.
+ * OR
+ *   will be copied to the *buf that was passed in.
+ *
+ * The code tries to avoid the copy if possible.
+ *
+ * It will return size bytes unless there was an error, in which case it will
+ * return as many as it managed to read (assuming blocking fd's which
+ * all current QEMUFile are)
+ */
+int qemu_get_buffer_less_copy(QEMUFile *f, uint8_t **buf, int size)
+{
+int pending = size;
+int done = 0;
+bool first = true;
+
+while (pending > 0) {
+int res;
+uint8_t *src;
+
+res = qemu_peek_buffer(f, &src, MIN(pending, IO_BUF_SIZE), 0);
+if (res == 0) {
+return done;
+}
+qemu_file_skip(f, res);
+done += res;
+pending -= res;
+if (first && res == size) {
+*buf = src;
+return done;
+} else {
+first = false;
+/* accumulate into the caller-supplied buffer */
+memcpy(*buf + done - res, src, res);
+}
+}
+return done;
+}
+
+/*
  * Peeks a single byte from the buffer; this isn't guaranteed to work if
  * offset leaves a gap after the previous read/peeked data.
  */
-- 
2.1.0




[Qemu-devel] [PATCH v6 07/47] Move copy out of qemu_peek_buffer

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

qemu_peek_buffer currently copies the data it reads into a buffer,
however the next patch wants access to the buffer without the copy,
hence rework to remove the copy to the layer above.

Signed-off-by: Dr. David Alan Gilbert 
---
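In condensed form the new calling convention looks like this (a sketch that
mirrors the vmstate_subsection_load hunk below): the peek now hands back a
pointer into the internal buffer and the caller does its own copy if it needs
one.

    uint8_t *peeked;
    char idstr[256];
    uint8_t len = qemu_peek_byte(f, 1);      /* length byte of the subsection id */

    if (qemu_peek_buffer(f, &peeked, len, 2) != len) {
        /* peek failed */
    } else {
        memcpy(idstr, peeked, len);          /* the copy now lives in the caller */
        idstr[len] = 0;
    }
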
 include/migration/qemu-file.h |  2 +-
 migration/qemu-file.c | 12 +++-
 migration/vmstate.c   |  5 +++--
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index 236a2e4..3fe545e 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -157,7 +157,7 @@ static inline void qemu_put_ubyte(QEMUFile *f, unsigned int 
v)
 void qemu_put_be16(QEMUFile *f, unsigned int v);
 void qemu_put_be32(QEMUFile *f, unsigned int v);
 void qemu_put_be64(QEMUFile *f, uint64_t v);
-int qemu_peek_buffer(QEMUFile *f, uint8_t *buf, int size, size_t offset);
+int qemu_peek_buffer(QEMUFile *f, uint8_t **buf, int size, size_t offset);
 int qemu_get_buffer(QEMUFile *f, uint8_t *buf, int size);
 /*
  * Note that you can only peek continuous bytes from where the current pointer
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 6c18e55..8dc5767 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -348,14 +348,14 @@ void qemu_file_skip(QEMUFile *f, int size)
 }
 
 /*
- * Read 'size' bytes from file (at 'offset') into buf without moving the
- * pointer.
+ * Read 'size' bytes from file (at 'offset') without moving the
+ * pointer and set 'buf' to point to that data.
  *
  * It will return size bytes unless there was an error, in which case it will
  * return as many as it managed to read (assuming blocking fd's which
  * all current QEMUFile are)
  */
-int qemu_peek_buffer(QEMUFile *f, uint8_t *buf, int size, size_t offset)
+int qemu_peek_buffer(QEMUFile *f, uint8_t **buf, int size, size_t offset)
 {
 int pending;
 int index;
@@ -391,7 +391,7 @@ int qemu_peek_buffer(QEMUFile *f, uint8_t *buf, int size, 
size_t offset)
 size = pending;
 }
 
-memcpy(buf, f->buf + index, size);
+*buf = f->buf + index;
 return size;
 }
 
@@ -410,11 +410,13 @@ int qemu_get_buffer(QEMUFile *f, uint8_t *buf, int size)
 
 while (pending > 0) {
 int res;
+uint8_t *src;
 
-res = qemu_peek_buffer(f, buf, MIN(pending, IO_BUF_SIZE), 0);
+res = qemu_peek_buffer(f, &src, MIN(pending, IO_BUF_SIZE), 0);
 if (res == 0) {
 return done;
 }
+memcpy(buf, src, res);
 qemu_file_skip(f, res);
 buf += res;
 pending -= res;
diff --git a/migration/vmstate.c b/migration/vmstate.c
index e5388f0..a64ebcc 100644
--- a/migration/vmstate.c
+++ b/migration/vmstate.c
@@ -358,7 +358,7 @@ static int vmstate_subsection_load(QEMUFile *f, const 
VMStateDescription *vmsd,
 trace_vmstate_subsection_load(vmsd->name);
 
 while (qemu_peek_byte(f, 0) == QEMU_VM_SUBSECTION) {
-char idstr[256];
+char idstr[256], *idstr_ret;
 int ret;
 uint8_t version_id, len, size;
 const VMStateDescription *sub_vmsd;
@@ -369,11 +369,12 @@ static int vmstate_subsection_load(QEMUFile *f, const 
VMStateDescription *vmsd,
 trace_vmstate_subsection_load_bad(vmsd->name, "(short)");
 return 0;
 }
-size = qemu_peek_buffer(f, (uint8_t *)idstr, len, 2);
+size = qemu_peek_buffer(f, (uint8_t **)&idstr_ret, len, 2);
 if (size != len) {
 trace_vmstate_subsection_load_bad(vmsd->name, "(peek fail)");
 return 0;
 }
+memcpy(idstr, idstr_ret, size);
 idstr[size] = 0;
 
 if (strncmp(vmsd->name, idstr, strlen(vmsd->name)) != 0) {
-- 
2.1.0




[Qemu-devel] [PATCH v6 09/47] Add wrapper for setting blocking status on a QEMUFile

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Add a wrapper to change the blocking status on a QEMUFile
rather than having to use qemu_set_block(qemu_get_fd(f));
it seems best to avoid exposing the fd since not all QEMUFile's
really have one.  With this wrapper we could move the implementation
down to be different on different transports.

Signed-off-by: Dr. David Alan Gilbert 
---
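Sketch of the intended use (the call sites here are assumptions, not part of
this patch):

    /* previously: qemu_set_block(qemu_get_fd(f)); */
    qemu_file_change_blocking(f, true);     /* block while the postcopy thread reads */
    /* ... */
    qemu_file_change_blocking(f, false);    /* and back to non-blocking afterwards */
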
 include/migration/qemu-file.h |  1 +
 migration/qemu-file.c | 15 +++
 2 files changed, 16 insertions(+)

diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index 4cac58f..c14555d 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -190,6 +190,7 @@ int qemu_file_get_error(QEMUFile *f);
 void qemu_file_set_error(QEMUFile *f, int ret);
 int qemu_file_shutdown(QEMUFile *f);
 void qemu_fflush(QEMUFile *f);
+void qemu_file_change_blocking(QEMUFile *f, bool block);
 
 static inline void qemu_put_be64s(QEMUFile *f, const uint64_t *pv)
 {
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index ec3a598..d84830f 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -609,3 +609,18 @@ int qemu_get_counted_string(QEMUFile *f, char buf[256])
 return res != len;
 }
 
+/*
+ * Change the blocking state of the QEMUFile.
+ * Note: On some transports the OS only keeps a single blocking state for
+ *   both directions, and thus changing the blocking on the main
+ *   QEMUFile can also affect the return path.
+ */
+void qemu_file_change_blocking(QEMUFile *f, bool block)
+{
+if (block) {
+qemu_set_block(qemu_get_fd(f));
+} else {
+qemu_set_nonblock(qemu_get_fd(f));
+}
+}
+
-- 
2.1.0




[Qemu-devel] [PATCH v6 11/47] Return path: Open a return path on QEMUFile for sockets

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Postcopy needs a method to send messages from the destination back to
the source, this is the 'return path'.

Wire it up for 'socket' QEMUFile's using a dup'd fd.

Signed-off-by: Dr. David Alan Gilbert 
---
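Usage sketch (the error handling is invented for illustration):

    QEMUFile *rp = qemu_file_get_return_path(f);

    if (!rp) {
        error_report("transport does not support a return path");
        return -1;
    }
    /* data written to 'rp' flows in the opposite direction to 'f' */
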
 include/migration/qemu-file.h |  7 +
 migration/qemu-file-unix.c| 65 +--
 migration/qemu-file.c | 12 
 3 files changed, 75 insertions(+), 9 deletions(-)

diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index c14555d..1ff8fbc 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -85,6 +85,11 @@ typedef size_t (QEMURamSaveFunc)(QEMUFile *f, void *opaque,
uint64_t *bytes_sent);
 
 /*
+ * Return a QEMUFile for comms in the opposite direction
+ */
+typedef QEMUFile *(QEMURetPathFunc)(void *opaque);
+
+/*
  * Stop any read or write (depending on flags) on the underlying
  * transport on the QEMUFile.
  * Existing blocking reads/writes must be woken
@@ -102,6 +107,7 @@ typedef struct QEMUFileOps {
 QEMURamHookFunc *after_ram_iterate;
 QEMURamHookFunc *hook_ram_load;
 QEMURamSaveFunc *save_page;
+QEMURetPathFunc *get_return_path;
 QEMUFileShutdownFunc *shut_down;
 } QEMUFileOps;
 
@@ -189,6 +195,7 @@ int64_t qemu_file_get_rate_limit(QEMUFile *f);
 int qemu_file_get_error(QEMUFile *f);
 void qemu_file_set_error(QEMUFile *f, int ret);
 int qemu_file_shutdown(QEMUFile *f);
+QEMUFile *qemu_file_get_return_path(QEMUFile *f);
 void qemu_fflush(QEMUFile *f);
 void qemu_file_change_blocking(QEMUFile *f, bool block);
 
diff --git a/migration/qemu-file-unix.c b/migration/qemu-file-unix.c
index bfbc086..1e7de7b 100644
--- a/migration/qemu-file-unix.c
+++ b/migration/qemu-file-unix.c
@@ -96,6 +96,52 @@ static int socket_shutdown(void *opaque, bool rd, bool wr)
 }
 }
 
+/*
+ * Give a QEMUFile* off the same socket but data in the opposite
+ * direction.
+ */
+static QEMUFile *socket_dup_return_path(void *opaque)
+{
+QEMUFileSocket *qfs = opaque;
+int revfd;
+bool this_is_read;
+QEMUFile *result;
+
+if (qemu_file_get_error(qfs->file)) {
+/* If the forward file is in error, don't try and open a return */
+return NULL;
+}
+
+/* I don't think there's a better way to tell which direction 'this' is */
+this_is_read = qfs->file->ops->get_buffer != NULL;
+
+revfd = dup(qfs->fd);
+if (revfd == -1) {
+error_report("Error duplicating fd for return path: %s",
+  strerror(errno));
+return NULL;
+}
+
+result = qemu_fopen_socket(revfd, this_is_read ? "wb" : "rb");
+
+if (!result) {
+close(revfd);
+}
+
+if (this_is_read) {
+/* The qemu_fopen_socket "wb" will mark the socket blocking,
+ * which would be OK for the return path, but the semantics
+ * of non-blocking is that it follows the underlying connection
+ * not the fd number, and thus setting the return path non-blocking
+ * ends up setting the forward path blocking, which we don't want
+ */
+qemu_set_nonblock(revfd);
+}
+
+
+return result;
+}
+
 static ssize_t unix_writev_buffer(void *opaque, struct iovec *iov, int iovcnt,
   int64_t pos)
 {
@@ -204,18 +250,19 @@ QEMUFile *qemu_fdopen(int fd, const char *mode)
 }
 
 static const QEMUFileOps socket_read_ops = {
-.get_fd = socket_get_fd,
-.get_buffer = socket_get_buffer,
-.close  = socket_close,
-.shut_down  = socket_shutdown
-
+.get_fd  = socket_get_fd,
+.get_buffer  = socket_get_buffer,
+.close   = socket_close,
+.shut_down   = socket_shutdown,
+.get_return_path = socket_dup_return_path
 };
 
 static const QEMUFileOps socket_write_ops = {
-.get_fd= socket_get_fd,
-.writev_buffer = socket_writev_buffer,
-.close = socket_close,
-.shut_down = socket_shutdown
+.get_fd  = socket_get_fd,
+.writev_buffer   = socket_writev_buffer,
+.close   = socket_close,
+.shut_down   = socket_shutdown,
+.get_return_path = socket_dup_return_path
 };
 
 QEMUFile *qemu_fopen_socket(int fd, const char *mode)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index d84830f..8b2ae8d 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -42,6 +42,18 @@ int qemu_file_shutdown(QEMUFile *f)
 return f->ops->shut_down(f->opaque, true, true);
 }
 
+/*
+ * Result: QEMUFile* for a 'return path' for comms in the opposite direction
+ * NULL if not available
+ */
+QEMUFile *qemu_file_get_return_path(QEMUFile *f)
+{
+if (!f->ops->get_return_path) {
+return NULL;
+}
+return f->ops->get_return_path(f->opaque);
+}
+
 bool qemu_file_mode_is_not_valid(const char *mode)
 {
 if (mode == NULL ||
-- 
2.1.0




[Qemu-devel] [PATCH v6 05/47] Create MigrationIncomingState

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

There are currently lots of pieces of incoming migration state scattered
around, and postcopy is adding more; it seems better to try and keep
it together.

allocate MIS in process_incoming_migration_co

Signed-off-by: Dr. David Alan Gilbert 
---
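The intended lifecycle, as a sketch mirroring process_incoming_migration_co():

    MigrationIncomingState *mis = migration_incoming_state_new(f);

    /* ... while the stream is loaded, any incoming-side code can reach the
     * shared state via migration_incoming_get_current() ... */

    migration_incoming_state_destroy();     /* frees 'mis' and clears the pointer */
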
 include/migration/migration.h |  9 +
 include/qemu/typedefs.h   |  1 +
 migration/migration.c | 28 
 savevm.c  |  2 ++
 4 files changed, 40 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index bf09968..7a6f521 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -42,6 +42,15 @@ struct MigrationParams {
 
 typedef struct MigrationState MigrationState;
 
+/* State for the incoming migration */
+struct MigrationIncomingState {
+QEMUFile *file;
+};
+
+MigrationIncomingState *migration_incoming_get_current(void);
+MigrationIncomingState *migration_incoming_state_new(QEMUFile *f);
+void migration_incoming_state_destroy(void);
+
 struct MigrationState
 {
 int64_t bandwidth_limit;
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index cde3314..74dfad3 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -38,6 +38,7 @@ typedef struct MemoryListener MemoryListener;
 typedef struct MemoryMappingList MemoryMappingList;
 typedef struct MemoryRegion MemoryRegion;
 typedef struct MemoryRegionSection MemoryRegionSection;
+typedef struct MigrationIncomingState MigrationIncomingState;
 typedef struct MigrationParams MigrationParams;
 typedef struct Monitor Monitor;
 typedef struct MouseTransformInfo MouseTransformInfo;
diff --git a/migration/migration.c b/migration/migration.c
index ce6c2e3..ce488cf 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -45,6 +45,7 @@ static bool deferred_incoming;
migrations at once.  For now we don't need to add
dynamic creation of migration */
 
+/* For outgoing */
 MigrationState *migrate_get_current(void)
 {
 static MigrationState current_migration = {
@@ -57,6 +58,28 @@ MigrationState *migrate_get_current(void)
return &current_migration;
 }
 
+/* For incoming */
+static MigrationIncomingState *mis_current;
+
+MigrationIncomingState *migration_incoming_get_current(void)
+{
+return mis_current;
+}
+
+MigrationIncomingState *migration_incoming_state_new(QEMUFile* f)
+{
+mis_current = g_malloc0(sizeof(MigrationIncomingState));
+mis_current->file = f;
+
+return mis_current;
+}
+
+void migration_incoming_state_destroy(void)
+{
+g_free(mis_current);
+mis_current = NULL;
+}
+
 /*
  * Called on -incoming with a defer: uri.
  * The migration can be started later after any parameters have been
@@ -101,9 +124,14 @@ static void process_incoming_migration_co(void *opaque)
 Error *local_err = NULL;
 int ret;
 
+migration_incoming_state_new(f);
+
 ret = qemu_loadvm_state(f);
+
 qemu_fclose(f);
 free_xbzrle_decoded_buf();
+migration_incoming_state_destroy();
+
 if (ret < 0) {
 error_report("load of migration failed: %s", strerror(-ret));
 exit(EXIT_FAILURE);
diff --git a/savevm.c b/savevm.c
index 9795e2e..81f6a29 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1320,9 +1320,11 @@ int load_vmstate(const char *name)
 }
 
 qemu_system_reset(VMRESET_SILENT);
+migration_incoming_state_new(f);
 ret = qemu_loadvm_state(f);
 
 qemu_fclose(f);
+migration_incoming_state_destroy();
 if (ret < 0) {
 error_report("Error %d while loading VM state", ret);
 return ret;
-- 
2.1.0




[Qemu-devel] [PATCH v6 13/47] Migration commands

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Create QEMU_VM_COMMAND section type for sending commands from
source to destination.  These commands are not intended to convey
guest state but to control the migration process.

For use in postcopy.

Signed-off-by: Dr. David Alan Gilbert 
---
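On the wire a command element is the QEMU_VM_COMMAND byte, a be16 command
number, a be16 data length, and then the data itself.  As a sender sketch
(MIG_CMD_PING only arrives in a later patch of this series, so treat it purely
as an example):

    uint32_t tmp = cpu_to_be32(42);

    qemu_savevm_command_send(f, MIG_CMD_PING, sizeof(tmp), (uint8_t *)&tmp);
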
 include/migration/migration.h |  1 +
 include/sysemu/sysemu.h   |  7 +++
 savevm.c  | 47 +++
 trace-events  |  1 +
 4 files changed, 56 insertions(+)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 7a6f521..f221c99 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -34,6 +34,7 @@
 #define QEMU_VM_SECTION_FULL 0x04
 #define QEMU_VM_SUBSECTION   0x05
 #define QEMU_VM_VMDESCRIPTION0x06
+#define QEMU_VM_COMMAND  0x07
 
 struct MigrationParams {
 bool blk;
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 8402e6e..e82b205 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -82,6 +82,11 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict);
 
 void qemu_announce_self(void);
 
+/* Subcommands for QEMU_VM_COMMAND */
+enum qemu_vm_cmd {
+MIG_CMD_INVALID = 0,   /* Must be 0 */
+};
+
 bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params);
@@ -90,6 +95,8 @@ int qemu_savevm_state_iterate(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
 uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
+void qemu_savevm_command_send(QEMUFile *f, enum qemu_vm_cmd command,
+  uint16_t len, uint8_t *data);
 int qemu_loadvm_state(QEMUFile *f);
 
 typedef enum DisplayType
diff --git a/savevm.c b/savevm.c
index eba9174..2dc5fbb 100644
--- a/savevm.c
+++ b/savevm.c
@@ -602,6 +602,24 @@ static void vmstate_save(QEMUFile *f, SaveStateEntry *se, 
QJSON *vmdesc)
 vmstate_save_state(f, se->vmsd, se->opaque, vmdesc);
 }
 
+
+/* Send a 'QEMU_VM_COMMAND' type element with the command
+ * and associated data.
+ */
+void qemu_savevm_command_send(QEMUFile *f,
+  enum qemu_vm_cmd command,
+  uint16_t len,
+  uint8_t *data)
+{
+qemu_put_byte(f, QEMU_VM_COMMAND);
+qemu_put_be16(f, (uint16_t)command);
+qemu_put_be16(f, len);
+if (len) {
+qemu_put_buffer(f, data, len);
+}
+qemu_fflush(f);
+}
+
 bool qemu_savevm_state_blocked(Error **errp)
 {
 SaveStateEntry *se;
@@ -927,6 +945,29 @@ static SaveStateEntry *find_se(const char *idstr, int 
instance_id)
 return NULL;
 }
 
+/*
+ * Process an incoming 'QEMU_VM_COMMAND'
+ * negative return on error (will issue error message)
+ */
+static int loadvm_process_command(QEMUFile *f)
+{
+uint16_t com;
+uint16_t len;
+
+com = qemu_get_be16(f);
+len = qemu_get_be16(f);
+
+trace_loadvm_process_command(com, len);
+switch (com) {
+
+default:
+error_report("VM_COMMAND 0x%x unknown (len 0x%x)", com, len);
+return -1;
+}
+
+return 0;
+}
+
 typedef struct LoadStateEntry {
 QLIST_ENTRY(LoadStateEntry) entry;
 SaveStateEntry *se;
@@ -1042,6 +1083,12 @@ int qemu_loadvm_state(QEMUFile *f)
 goto out;
 }
 break;
+case QEMU_VM_COMMAND:
+ret = loadvm_process_command(f);
+if (ret < 0) {
+goto out;
+}
+break;
 default:
 error_report("Unknown savevm section type %d", section_type);
 ret = -EINVAL;
diff --git a/trace-events b/trace-events
index 39957fe..4d093dc 100644
--- a/trace-events
+++ b/trace-events
@@ -1171,6 +1171,7 @@ vmware_setmode(uint32_t w, uint32_t h, uint32_t bpp) 
"%dx%d @ %d bpp"
 qemu_loadvm_state_section(unsigned int section_type) "%d"
 qemu_loadvm_state_section_partend(uint32_t section_id) "%u"
 qemu_loadvm_state_section_startfull(uint32_t section_id, const char *idstr, 
uint32_t instance_id, uint32_t version_id) "%u(%s) %u %u"
+loadvm_process_command(uint16_t com, uint16_t len) "com=0x%x len=%d"
 savevm_section_start(const char *id, unsigned int section_id) "%s, section_id 
%u"
 savevm_section_end(const char *id, unsigned int section_id, int ret) "%s, 
section_id %u -> %d"
 savevm_state_begin(void) ""
-- 
2.1.0




[Qemu-devel] [PATCH v6 03/47] qemu_ram_foreach_block: pass up error value, and down the ramblock name

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Check the return value of the function it calls and error out if it's non-0.
Fix up qemu_rdma_init_one_block, which is the only current caller,
  and rdma_add_block, the only function it calls using it.

Pass the name of the ramblock to the function; helps in debugging.

Signed-off-by: Dr. David Alan Gilbert 
Reviewed-by: David Gibson 
---
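As a sketch of a callback under the new signature (the counting logic is
invented for illustration); returning non-zero aborts the walk and is passed
back to the caller of qemu_ram_foreach_block():

    static int count_block(const char *block_name, void *host_addr,
                           ram_addr_t offset, ram_addr_t length, void *opaque)
    {
        unsigned long *count = opaque;

        if (!host_addr) {
            return -1;               /* error: stops the iteration */
        }
        (*count)++;
        return 0;
    }

    /* unsigned long nblocks = 0;
     * int ret = qemu_ram_foreach_block(count_block, &nblocks); */
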
 exec.c| 10 --
 include/exec/cpu-common.h |  4 ++--
 migration/rdma.c  |  4 ++--
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/exec.c b/exec.c
index 874ecfc..7693794 100644
--- a/exec.c
+++ b/exec.c
@@ -3067,14 +3067,20 @@ bool cpu_physical_memory_is_io(hwaddr phys_addr)
  memory_region_is_romd(mr));
 }
 
-void qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
+int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
 {
 RAMBlock *block;
+int ret = 0;
 
 rcu_read_lock();
 QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-func(block->host, block->offset, block->used_length, opaque);
+ret = func(block->idstr, block->host, block->offset,
+   block->used_length, opaque);
+if (ret) {
+break;
+}
 }
 rcu_read_unlock();
+return ret;
 }
 #endif
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index fcc3162..2abecac 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -125,10 +125,10 @@ void cpu_flush_icache_range(hwaddr start, int len);
 extern struct MemoryRegion io_mem_rom;
 extern struct MemoryRegion io_mem_notdirty;
 
-typedef void (RAMBlockIterFunc)(void *host_addr,
+typedef int (RAMBlockIterFunc)(const char *block_name, void *host_addr,
 ram_addr_t offset, ram_addr_t length, void *opaque);
 
-void qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
+int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque);
 
 #endif
 
diff --git a/migration/rdma.c b/migration/rdma.c
index 77e3444..c13ec6b 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -570,10 +570,10 @@ static int rdma_add_block(RDMAContext *rdma, void 
*host_addr,
  * in advanced before the migration starts. This tells us where the RAM blocks
  * are so that we can register them individually.
  */
-static void qemu_rdma_init_one_block(void *host_addr,
+static int qemu_rdma_init_one_block(const char *block_name, void *host_addr,
 ram_addr_t block_offset, ram_addr_t length, void *opaque)
 {
-rdma_add_block(opaque, host_addr, block_offset, length);
+return rdma_add_block(opaque, host_addr, block_offset, length);
 }
 
 /*
-- 
2.1.0




[Qemu-devel] [PATCH v6 06/47] Provide runtime Target page information

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

The migration code generally is built target-independent, however
there are a few places where knowing the target page size would
avoid artificially moving stuff into arch_init.

Provide 'qemu_target_page_bits()' that returns TARGET_PAGE_BITS
to other bits of code so that they can stay target-independent.

Signed-off-by: Dr. David Alan Gilbert 
---
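Sketch of the intended use in target-independent code ('ram_bytes' is assumed
to be given; BITS_TO_LONGS comes from qemu/bitops.h):

    size_t page_bits = qemu_target_page_bits();
    unsigned long pages = ram_bytes >> page_bits;          /* bytes -> target pages */
    size_t bitmap_bytes = BITS_TO_LONGS(pages) * sizeof(unsigned long);
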
 exec.c  | 10 ++
 include/sysemu/sysemu.h |  1 +
 2 files changed, 11 insertions(+)

diff --git a/exec.c b/exec.c
index 7693794..c3027cf 100644
--- a/exec.c
+++ b/exec.c
@@ -3038,6 +3038,16 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
 }
 return 0;
 }
+
+/*
+ * Allows code that needs to deal with migration bitmaps etc to still be built
+ * target independent.
+ */
+size_t qemu_target_page_bits(void)
+{
+return TARGET_PAGE_BITS;
+}
+
 #endif
 
 /*
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 7a1ea91..bd67f86 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -68,6 +68,7 @@ int qemu_reset_requested_get(void);
 void qemu_system_killed(int signal, pid_t pid);
 void qemu_devices_reset(void);
 void qemu_system_reset(bool report);
+size_t qemu_target_page_bits(void);
 
 void qemu_add_exit_notifier(Notifier *notify);
 void qemu_remove_exit_notifier(Notifier *notify);
-- 
2.1.0




[Qemu-devel] [PATCH v6 01/47] Start documenting how postcopy works.

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Signed-off-by: Dr. David Alan Gilbert 
---
 docs/migration.txt | 167 +
 1 file changed, 167 insertions(+)

diff --git a/docs/migration.txt b/docs/migration.txt
index 0492a45..f975c75 100644
--- a/docs/migration.txt
+++ b/docs/migration.txt
@@ -294,3 +294,170 @@ save/send this state when we are in the middle of a pio 
operation
 (that is what ide_drive_pio_state_needed() checks).  If DRQ_STAT is
 not enabled, the values on that fields are garbage and don't need to
 be sent.
+
+= Return path =
+
+In most migration scenarios there is only a single data path that runs
+from the source VM to the destination, typically along a single fd (although
+possibly with another fd or similar for some fast way of throwing pages 
across).
+
+However, some uses need two way communication; in particular the Postcopy 
destination
+needs to be able to request pages on demand from the source.
+
+For these scenarios there is a 'return path' from the destination to the 
source;
+qemu_file_get_return_path(QEMUFile* fwdpath) gives the QEMUFile* for the return
+path.
+
+  Source side
+ Forward path - written by migration thread
+ Return path  - opened by main thread, read by return-path thread
+
+  Destination side
+ Forward path - read by main thread
+ Return path  - opened by main thread, written by main thread AND postcopy
+thread (protected by rp_mutex)
+
+= Postcopy =
+'Postcopy' migration is a way to deal with migrations that refuse to converge;
+its plus side is that there is an upper bound on the amount of migration 
traffic
+and time it takes, the down side is that during the postcopy phase, a failure 
of
+*either* side or the network connection causes the guest to be lost.
+
+In postcopy the destination CPUs are started before all the memory has been
+transferred, and accesses to pages that are yet to be transferred cause
+a fault that's translated by QEMU into a request to the source QEMU.
+
+Postcopy can be combined with precopy (i.e. normal migration) so that if 
precopy
+doesn't finish in a given time the switch is made to postcopy.
+
+=== Enabling postcopy ===
+
+To enable postcopy (prior to the start of migration):
+
+migrate_set_capability x-postcopy-ram on
+
+The migration will still start in precopy mode, however issuing:
+
+migrate_start_postcopy
+
+will now cause the transition from precopy to postcopy.
+It can be issued immediately after migration is started or any
+time later on.  Issuing it after the end of a migration is harmless.
+
+=== Postcopy device transfer ===
+
+Loading of device data may cause the device emulation to access guest RAM
+that may trigger faults that have to be resolved by the source, as such
+the migration stream has to be able to respond with page data *during* the
+device load, and hence the device data has to be read from the stream 
completely
+before the device load begins to free the stream up.  This is achieved by
+'packaging' the device data into a blob that's read in one go.
+
+Source behaviour
+
+Until postcopy is entered the migration stream is identical to normal
+precopy, except for the addition of a 'postcopy advise' command at
+the beginning, to tell the destination that postcopy might happen.
+When postcopy starts the source sends the page discard data and then
+forms the 'package' containing:
+
+   Command: 'postcopy listen'
+   The device state
+  A series of sections, identical to the precopy streams device state 
stream
+  containing everything except postcopiable devices (i.e. RAM)
+   Command: 'postcopy run'
+
+The 'package' is sent as the data part of a Command: 'CMD_PACKAGED', and the
+contents are formatted in the same way as the main migration stream.
+
+Destination behaviour
+
+Initially the destination looks the same as precopy, with a single thread
+reading the migration stream; the 'postcopy advise' and 'discard' commands
+are processed to change the way RAM is managed, but don't affect the stream
+processing.
+
+--
+1  2   3 4 5  6   7
+main -DISCARD-CMD_PACKAGED ( LISTEN  DEVICE DEVICE DEVICE RUN )
+thread |   |
+   | (page request)
+   |\___
+   v\
+listen thread: --- page -- page -- page -- page -- page --
+
+   a   bc
+--
+
+On receipt of CMD_PACKAGED (1)
+   All the data associated with the package - the ( ... ) section in the
+diagram - is read into memory (into a QEMUSizedBuffer), and the main thread
+recurses into qemu_loadvm_state_main to process the contents of the package (2)
+which contains commands (3

[Qemu-devel] [PATCH v6 02/47] Split header writing out of qemu_savevm_state_begin

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

Split qemu_savevm_state_begin to:
  qemu_savevm_state_header   That writes the initial file header.
  qemu_savevm_state_beginThat sets up devices and does the first
 device pass.

Used later in postcopy.

Signed-off-by: Dr. David Alan Gilbert 
---
 include/sysemu/sysemu.h |  1 +
 migration/migration.c   |  1 +
 savevm.c| 11 ---
 trace-events|  1 +
 4 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 8a52934..7a1ea91 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -84,6 +84,7 @@ void qemu_announce_self(void);
 bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params);
+void qemu_savevm_state_header(QEMUFile *f);
 int qemu_savevm_state_iterate(QEMUFile *f);
 void qemu_savevm_state_complete(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
diff --git a/migration/migration.c b/migration/migration.c
index bc42490..ce6c2e3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -617,6 +617,7 @@ static void *migration_thread(void *opaque)
 int64_t start_time = initial_time;
 bool old_vm_running = false;
 
+qemu_savevm_state_header(s->file);
 qemu_savevm_state_begin(s->file, &s->params);
 
 s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
diff --git a/savevm.c b/savevm.c
index 3b0e222..c08abcc 100644
--- a/savevm.c
+++ b/savevm.c
@@ -616,6 +616,13 @@ bool qemu_savevm_state_blocked(Error **errp)
 return false;
 }
 
+void qemu_savevm_state_header(QEMUFile *f)
+{
+trace_savevm_state_header();
+qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
+qemu_put_be32(f, QEMU_VM_FILE_VERSION);
+}
+
 void qemu_savevm_state_begin(QEMUFile *f,
  const MigrationParams *params)
 {
@@ -630,9 +637,6 @@ void qemu_savevm_state_begin(QEMUFile *f,
 se->ops->set_params(params, se->opaque);
 }
 
-qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
-qemu_put_be32(f, QEMU_VM_FILE_VERSION);
-
 QTAILQ_FOREACH(se, &savevm_handlers, entry) {
 int len;
 
@@ -842,6 +846,7 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
 }
 
 qemu_mutex_unlock_iothread();
+qemu_savevm_state_header(f);
+qemu_savevm_state_begin(f, &params);
 qemu_mutex_lock_iothread();
 
diff --git a/trace-events b/trace-events
index 30eba92..b4641b6 100644
--- a/trace-events
+++ b/trace-events
@@ -1174,6 +1174,7 @@ qemu_loadvm_state_section_startfull(uint32_t section_id, 
const char *idstr, uint
 savevm_section_start(const char *id, unsigned int section_id) "%s, section_id 
%u"
 savevm_section_end(const char *id, unsigned int section_id, int ret) "%s, 
section_id %u -> %d"
 savevm_state_begin(void) ""
+savevm_state_header(void) ""
 savevm_state_iterate(void) ""
 savevm_state_complete(void) ""
 savevm_state_cancel(void) ""
-- 
2.1.0




[Qemu-devel] [PATCH v6 04/47] Add qemu_get_counted_string to read a string prefixed by a count byte

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

and use it in loadvm_state and ram_load.

Signed-off-by: Dr. David Alan Gilbert 
---
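The wire format is a single length byte followed by that many characters;
reading one now collapses to a single call (a sketch mirroring the savevm hunk
below):

    char idstr[256];

    if (qemu_get_counted_string(f, idstr)) {
        return -EINVAL;            /* short read */
    }
    /* idstr is now 0-terminated */
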
 arch_init.c   |  5 +
 include/migration/qemu-file.h |  3 +++
 migration/qemu-file.c | 16 
 savevm.c  | 11 ++-
 4 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 4c8fcee..06722bb 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -1145,13 +1145,10 @@ static int ram_load(QEMUFile *f, void *opaque, int 
version_id)
 total_ram_bytes = addr;
 while (!ret && total_ram_bytes) {
 RAMBlock *block;
-uint8_t len;
 char id[256];
 ram_addr_t length;
 
-len = qemu_get_byte(f);
-qemu_get_buffer(f, (uint8_t *)id, len);
-id[len] = 0;
+qemu_get_counted_string(f, id);
 length = qemu_get_be64(f);
 
 QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
diff --git a/include/migration/qemu-file.h b/include/migration/qemu-file.h
index 745a850..236a2e4 100644
--- a/include/migration/qemu-file.h
+++ b/include/migration/qemu-file.h
@@ -309,4 +309,7 @@ static inline void qemu_get_sbe64s(QEMUFile *f, int64_t *pv)
 {
 qemu_get_be64s(f, (uint64_t *)pv);
 }
+
+int qemu_get_counted_string(QEMUFile *f, char buf[256]);
+
 #endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 1a4f986..6c18e55 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -546,3 +546,19 @@ uint64_t qemu_get_be64(QEMUFile *f)
 v |= qemu_get_be32(f);
 return v;
 }
+
+/*
+ * Get a string whose length is determined by a single preceding byte
+ * A preallocated 256 byte buffer must be passed in.
+ * Returns: 0 on success and a 0 terminated string in the buffer
+ */
+int qemu_get_counted_string(QEMUFile *f, char buf[256])
+{
+unsigned int len = qemu_get_byte(f);
+int res = qemu_get_buffer(f, (uint8_t *)buf, len);
+
+buf[len] = 0;
+
+return res != len;
+}
+
diff --git a/savevm.c b/savevm.c
index c08abcc..9795e2e 100644
--- a/savevm.c
+++ b/savevm.c
@@ -969,8 +969,7 @@ int qemu_loadvm_state(QEMUFile *f)
 while ((section_type = qemu_get_byte(f)) != QEMU_VM_EOF) {
 uint32_t instance_id, version_id, section_id;
 SaveStateEntry *se;
-char idstr[257];
-int len;
+char idstr[256];
 
 trace_qemu_loadvm_state_section(section_type);
 switch (section_type) {
@@ -978,9 +977,11 @@ int qemu_loadvm_state(QEMUFile *f)
 case QEMU_VM_SECTION_FULL:
 /* Read section start */
 section_id = qemu_get_be32(f);
-len = qemu_get_byte(f);
-qemu_get_buffer(f, (uint8_t *)idstr, len);
-idstr[len] = 0;
+if (qemu_get_counted_string(f, idstr)) {
+error_report("Unable to read ID string for section %u",
+section_id);
+return -EINVAL;
+}
 instance_id = qemu_get_be32(f);
 version_id = qemu_get_be32(f);
 
-- 
2.1.0




[Qemu-devel] [PATCH v6 00/47] Postcopy implementation

2015-04-14 Thread Dr. David Alan Gilbert (git)
From: "Dr. David Alan Gilbert" 

  This is the 6th cut of my version of postcopy; it is designed for use with
the Linux kernel additions posted by Andrea Arcangeli here:

git clone --reference linux -b userfault18 
git://git.kernel.org/pub/scm/linux/kernel/git/andrea/aa.git

(Note this is a different API from the last version)

This qemu series can be found at:

https://github.com/orbitfp7/qemu.git
on the wp3-postcopy-v6 tag.

It addresses some but not yet all of the previous review comments;
however there are a couple of large simplifications, so it seems
worth posting to meet the new kernel API and to stop people
reviewing dead code.

Note: That the userfaultfd.h header is no longer included in this
tree:
  - if you're building with the appropriate kernel headers it should find it
  - if you're building on a host that doesn't have the kernel headers
installed in the right place then:
   configure with:   --extra-cflags="-D__NR_userfaultfd=323"
   cp include/uapi/linux/userfaultfd.h into somewhere in the include
   path, e.g.  /usr/local/include/linux

v6
  Removed the PMI bitmaps
  - Andrea updated the kernel API so that userspace doesn't
need to do wakeups, and thus QEMU doesn't need to keep
track of which pages it's received; there is a price - which
is we end up sending more dupes to the source, but it simplifies
stuff a lot and makes the normal paths a lot quicker.
(10s of line change in kernel, 10%-ish simplification in this code!)
  Changed discard message format to a simpler start/end address scheme
and rework discard and chunking code to work in long's to match bitmap
  'qemu_get_buffer_less_copy' for postcopy pages
  - avoids a userspace copy since the kernel now does it
  - the new qemufile interface might also be useful for other places that
don't need a copy (maybe xbzrle?)
  Changed the blockingness of the incoming fd
  it was incorrectly blocking during the precopy phase after a postcopy was
  enabled, causing the HMP to be unavailable.  It's now blocking only once
  the postcopy thread starts up; since it's not a coroutine, it can't deal
  with the yields in qemu_file.
  An error on the return-path now marks the migration as failed

  Fixups from Dave Gibson's comments
Removed can_postcopy, renamed save_complete to save_complete_precopy
added save_complete_postcopy
Simplified loadvm loop exits
discard message format changes above
and many more smaller changes.

  small fixups for RCU


This work has been partially funded by the EU Orbit project:
  see http://www.orbitproject.eu/about/

TODO:
  The major work is to rework the page send/receive loops so that supporting
  larger host pages doesn't make it quite as messy.

Dr. David Alan Gilbert (47):
  Start documenting how postcopy works.
  Split header writing out of qemu_savevm_state_begin
  qemu_ram_foreach_block: pass up error value, and down the ramblock
name
  Add qemu_get_counted_string to read a string prefixed by a count byte
  Create MigrationIncomingState
  Provide runtime Target page information
  Move copy out of qemu_peek_buffer
  Add qemu_get_buffer_less_copy to avoid copies some of the time
  Add wrapper for setting blocking status on a QEMUFile
  Rename save_live_complete to save_live_complete_precopy
  Return path: Open a return path on QEMUFile for sockets
  Return path: socket_writev_buffer: Block even on non-blocking fd's
  Migration commands
  Return path: Control commands
  Return path: Send responses from destination to source
  Return path: Source handling of return path
  ram_debug_dump_bitmap: Dump a migration bitmap as text
  Move loadvm_handlers into MigrationIncomingState
  Rework loadvm path for subloops
  Add migration-capability boolean for postcopy-ram.
  Add wrappers and handlers for sending/receiving the postcopy-ram
migration messages.
  MIG_CMD_PACKAGED: Send a packaged chunk of migration stream
  migrate_init: Call from savevm
  Modify save_live_pending for postcopy
  postcopy: OS support test
  migrate_start_postcopy: Command to trigger transition to postcopy
  MIGRATION_STATUS_POSTCOPY_ACTIVE: Add new migration state
  Add qemu_savevm_state_complete_postcopy
  Postcopy: Maintain sentmap and calculate discard
  postcopy: Incoming initialisation
  postcopy: ram_enable_notify to switch on userfault
  Postcopy: Postcopy startup in migration thread
  Postcopy end in migration_thread
  Page request:  Add MIG_RP_MSG_REQ_PAGES reverse command
  Page request: Process incoming page request
  Page request: Consume pages off the post-copy queue
  postcopy_ram.c: place_page and helpers
  Postcopy: Use helpers to map pages during migration
  qemu_ram_block_from_host
  Don't sync dirty bitmaps in postcopy
  Host page!=target page: Cleanup bitmaps
  Postcopy; Handle userfault requests
  Start up a postcopy/listener thread ready for incoming page data
  postcopy: Wir

Re: [Qemu-devel] [Spice-devel] [PATCH] spice: fix simple display on bigendian hosts

2015-04-14 Thread Denis Kirjanov
On 4/14/15, Denis Kirjanov  wrote:
> On 4/14/15, Denis Kirjanov  wrote:
>> On 4/14/15, Gerd Hoffmann  wrote:
>>> Denis Kirjanov is busy getting spice run on ppc64 and trapped into this
>>> one.  Spice wire format is little endian, so we have to explicitly say
>>> we want little endian when letting pixman convert the data for us.
>>>
>>> Reported-by: Denis Kirjanov 
>>> Signed-off-by: Gerd Hoffmann 
>>> ---
>> Yeah, that fixes the issue. Thanks Gerd!
>
> Looks like the patch fixes only half of the problem: the inverted
> colors come back when the client reconnects to the VM

Program received signal SIGSEGV, Segmentation fault.
0x74e10258 in ?? () from /usr/lib/x86_64-linux-gnu/libpixman-1.so.0
(gdb) bt
#0  0x74e10258 in ?? () from /usr/lib/x86_64-linux-gnu/libpixman-1.so.0
#1  0x74e10239 in pixman_image_unref () from
/usr/lib/x86_64-linux-gnu/libpixman-1.so.0
#2  0x778e4117 in canvas_get_quic
(canvas=canvas@entry=0x7ceb80, image=image@entry=0xae2720,
want_original=want_original@entry=0) at
../spice-common/common/canvas_base.c:390
#3  0x778e686d in canvas_get_image_internal
(canvas=canvas@entry=0x7ceb80, image=0xae2720,
want_original=want_original@entry=0, real_get=real_get@entry=1) at
../spice-common/common/canvas_base.c:1146
#4  0x778e83fd in canvas_get_image (want_original=0,
image=, canvas=0x7ceb80)
at ../spice-common/common/canvas_base.c:1309
#5  canvas_draw_copy (spice_canvas=0x7ceb80, bbox=0xae26c4,
clip=, copy=0xae26e8)
at ../spice-common/common/canvas_base.c:2281
#6  0x778c98db in display_handle_draw_copy (channel=0x7c59b0,
in=0x841f00) at channel-display.c:1563
#7  0x778bfe7c in spice_channel_handle_msg (channel=0x7c59b0,
msg=0x841f00) at spice-channel.c:2858
#8  0x778bce6c in spice_channel_recv_msg (channel=0x7c59b0,
msg_handler=0x778bfd9f ,
data=0x0) at spice-channel.c:1869
#9  0x778bd4f3 in spice_channel_iterate_read
(channel=0x7c59b0) at spice-channel.c:2106
#10 0x778bd6fd in spice_channel_iterate (channel=0x7c59b0) at
spice-channel.c:2144
#11 0x778be4ad in spice_channel_coroutine (data=0x7c59b0) at
spice-channel.c:2430
#12 0x778ea6ff in coroutine_trampoline (cc=0x7c5058) at
coroutine_ucontext.c:63
#13 0x778ea4c9 in continuation_trampoline (i0=,
i1=) at continuation.c:55
#14 0x763278b0 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#15 0x007c5420 in ?? ()
#16 0x in ?? ()


>>>  include/ui/qemu-pixman.h | 2 ++
>>>  ui/spice-display.c   | 2 +-
>>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/ui/qemu-pixman.h b/include/ui/qemu-pixman.h
>>> index 5d7a9ac..e34c4ef 100644
>>> --- a/include/ui/qemu-pixman.h
>>> +++ b/include/ui/qemu-pixman.h
>>> @@ -35,6 +35,7 @@
>>>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_r8g8b8a8
>>>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_x8b8g8r8
>>>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_a8b8g8r8
>>> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_b8g8r8x8
>>>  #else
>>>  # define PIXMAN_BE_r8g8b8 PIXMAN_b8g8r8
>>>  # define PIXMAN_BE_x8r8g8b8   PIXMAN_b8g8r8x8
>>> @@ -45,6 +46,7 @@
>>>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_a8b8g8r8
>>>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_r8g8b8x8
>>>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_r8g8b8a8
>>> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_x8r8g8b8
>>>  #endif
>>>
>>>  /* 
>>> */
>>> diff --git a/ui/spice-display.c b/ui/spice-display.c
>>> index 1644185..1a64e07 100644
>>> --- a/ui/spice-display.c
>>> +++ b/ui/spice-display.c
>>> @@ -178,7 +178,7 @@ static void
>>> qemu_spice_create_one_update(SimpleSpiceDisplay *ssd,
>>>  image->bitmap.palette = 0;
>>>  image->bitmap.format = SPICE_BITMAP_FMT_32BIT;
>>>
>>> -dest = pixman_image_create_bits(PIXMAN_x8r8g8b8, bw, bh,
>>> +dest = pixman_image_create_bits(PIXMAN_LE_x8r8g8b8, bw, bh,
>>>  (void *)update->bitmap, bw * 4);
>>>  pixman_image_composite(PIXMAN_OP_SRC, ssd->surface, NULL,
>>> ssd->mirror,
>>> rect->left, rect->top, 0, 0,
>>> --
>>> 1.8.3.1
>>>
>>> ___
>>> Spice-devel mailing list
>>> spice-de...@lists.freedesktop.org
>>> http://lists.freedesktop.org/mailman/listinfo/spice-devel
>>>
>>
>>
>> --
>> Regards,
>> Denis
>>
>
>
> --
> Regards,
> Denis
>


-- 
Regards,
Denis



Re: [Qemu-devel] [Spice-devel] [PATCH] spice: fix simple display on bigendian hosts

2015-04-14 Thread Denis Kirjanov
On 4/14/15, Gerd Hoffmann  wrote:
> Denis Kirjanov is busy getting spice run on ppc64 and trapped into this
> one.  Spice wire format is little endian, so we have to explicitly say
> we want little endian when letting pixman convert the data for us.
>
> Reported-by: Denis Kirjanov 
> Signed-off-by: Gerd Hoffmann 
> ---
Yeah, that fixes the issue. Thanks Gerd!

>  include/ui/qemu-pixman.h | 2 ++
>  ui/spice-display.c   | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/include/ui/qemu-pixman.h b/include/ui/qemu-pixman.h
> index 5d7a9ac..e34c4ef 100644
> --- a/include/ui/qemu-pixman.h
> +++ b/include/ui/qemu-pixman.h
> @@ -35,6 +35,7 @@
>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_r8g8b8a8
>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_x8b8g8r8
>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_a8b8g8r8
> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_b8g8r8x8
>  #else
>  # define PIXMAN_BE_r8g8b8 PIXMAN_b8g8r8
>  # define PIXMAN_BE_x8r8g8b8   PIXMAN_b8g8r8x8
> @@ -45,6 +46,7 @@
>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_a8b8g8r8
>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_r8g8b8x8
>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_r8g8b8a8
> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_x8r8g8b8
>  #endif
>
>  /*  */
> diff --git a/ui/spice-display.c b/ui/spice-display.c
> index 1644185..1a64e07 100644
> --- a/ui/spice-display.c
> +++ b/ui/spice-display.c
> @@ -178,7 +178,7 @@ static void
> qemu_spice_create_one_update(SimpleSpiceDisplay *ssd,
>  image->bitmap.palette = 0;
>  image->bitmap.format = SPICE_BITMAP_FMT_32BIT;
>
> -dest = pixman_image_create_bits(PIXMAN_x8r8g8b8, bw, bh,
> +dest = pixman_image_create_bits(PIXMAN_LE_x8r8g8b8, bw, bh,
>  (void *)update->bitmap, bw * 4);
>  pixman_image_composite(PIXMAN_OP_SRC, ssd->surface, NULL, ssd->mirror,
> rect->left, rect->top, 0, 0,
> --
> 1.8.3.1
>
> ___
> Spice-devel mailing list
> spice-de...@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/spice-devel
>


-- 
Regards,
Denis



Re: [Qemu-devel] [Spice-devel] [PATCH] spice: fix simple display on bigendian hosts

2015-04-14 Thread Denis Kirjanov
On 4/14/15, Denis Kirjanov  wrote:
> On 4/14/15, Gerd Hoffmann  wrote:
>> Denis Kirjanov is busy getting spice run on ppc64 and trapped into this
>> one.  Spice wire format is little endian, so we have to explicitly say
>> we want little endian when letting pixman convert the data for us.
>>
>> Reported-by: Denis Kirjanov 
>> Signed-off-by: Gerd Hoffmann 
>> ---
> Yeah, that fixes the issue. Thanks Gerd!

Looks like the patch fixes only half of the problem: the inverted
colors come back when the client reconnects to the VM
>
>>  include/ui/qemu-pixman.h | 2 ++
>>  ui/spice-display.c   | 2 +-
>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/ui/qemu-pixman.h b/include/ui/qemu-pixman.h
>> index 5d7a9ac..e34c4ef 100644
>> --- a/include/ui/qemu-pixman.h
>> +++ b/include/ui/qemu-pixman.h
>> @@ -35,6 +35,7 @@
>>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_r8g8b8a8
>>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_x8b8g8r8
>>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_a8b8g8r8
>> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_b8g8r8x8
>>  #else
>>  # define PIXMAN_BE_r8g8b8 PIXMAN_b8g8r8
>>  # define PIXMAN_BE_x8r8g8b8   PIXMAN_b8g8r8x8
>> @@ -45,6 +46,7 @@
>>  # define PIXMAN_BE_r8g8b8a8   PIXMAN_a8b8g8r8
>>  # define PIXMAN_BE_x8b8g8r8   PIXMAN_r8g8b8x8
>>  # define PIXMAN_BE_a8b8g8r8   PIXMAN_r8g8b8a8
>> +# define PIXMAN_LE_x8r8g8b8   PIXMAN_x8r8g8b8
>>  #endif
>>
>>  /* 
>> */
>> diff --git a/ui/spice-display.c b/ui/spice-display.c
>> index 1644185..1a64e07 100644
>> --- a/ui/spice-display.c
>> +++ b/ui/spice-display.c
>> @@ -178,7 +178,7 @@ static void
>> qemu_spice_create_one_update(SimpleSpiceDisplay *ssd,
>>  image->bitmap.palette = 0;
>>  image->bitmap.format = SPICE_BITMAP_FMT_32BIT;
>>
>> -dest = pixman_image_create_bits(PIXMAN_x8r8g8b8, bw, bh,
>> +dest = pixman_image_create_bits(PIXMAN_LE_x8r8g8b8, bw, bh,
>>  (void *)update->bitmap, bw * 4);
>>  pixman_image_composite(PIXMAN_OP_SRC, ssd->surface, NULL,
>> ssd->mirror,
>> rect->left, rect->top, 0, 0,
>> --
>> 1.8.3.1
>>
>> ___
>> Spice-devel mailing list
>> spice-de...@lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/spice-devel
>>
>
>
> --
> Regards,
> Denis
>


-- 
Regards,
Denis



[Qemu-devel] feature proposal: checkpoint-assisted migration

2015-04-14 Thread Thomas Knauth
Dear list,

my research revolves around cloud computing, virtual machines and
migration. In this context I came across the following: a recent study
by IBM indicates that a typical VM only migrates between a small set
of physical servers; often just two.

The potential for optimization is clear. By storing a snapshot of the
VM's memory on the migration source, we can reuse (some) of this
information on a subsequent incoming migration.

In the course of our research we implemented a prototype of this
feature within kvm/qemu. We would like to contribute it to mainline,
but it needs cleanup and proper testing. As is the nature of
research prototypes, the code is ugly and not well integrated with the
existing kvm/qemu codebase. To avoid confusion and irritation, I want
to mention that I have little experience in contributing to large
open-source projects. So if I violate some unwritten protocol or best
practises, please be patient.

Initially, I'm hoping to get some feedback on the current state of the
implementation. It would be immensely helpful if someone more
intimately familiar with the migration code/framework could comment on
the prototyp's current state. The code very likely needs restructuring
to make it fit better into the overall codebase. Getting information
on what needs to change and how to change it would be my goal.

The prototype also touches the migration protocol. Changes in this
part probably need discussion. The basic idea is that if a block of
memory (e.g., a 4 KiB page) already exists at the migration
destination, than the source only sends a checksum of the block
(currently MD5). The destination uses the checksum to find the
corresponding block, e.g., by reading it from local storage (instead
of transferring it over the network). This definitely reduces the
migration traffic and usually also the overall migration time.
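
To make that concrete, here is a rough destination-side sketch under some
assumptions of mine (a hash table keyed by the 16-byte MD5 digest, mapping to
an offset in the locally stored snapshot; all names are invented and this is
not the prototype's actual code):

    /* needs glib.h, unistd.h and stdint.h; digest_to_offset must be created
     * with a hash/equal pair suited to 16-byte binary keys */
    static int fill_block_from_checksum(GHashTable *digest_to_offset,
                                        int snapshot_fd,
                                        const uint8_t digest[16],
                                        uint8_t *dest)      /* 4 KiB destination */
    {
        gpointer val;

        if (!g_hash_table_lookup_extended(digest_to_offset, digest, NULL, &val)) {
            return -1;   /* unknown digest: fall back to receiving the full block */
        }
        if (pread(snapshot_fd, dest, 4096, (off_t)(uintptr_t)val) != 4096) {
            return -1;
        }
        return 0;
    }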

We currently use MD5 checksums to identify (un)modified blocks. For
strict ping-pong migration, where a VM only migrates between two
servers, there is also the possibility to use dirty page tracking to
identify modified pages. This has not been implemented so far. We are
also unclear about the potential performance tradeoffs this might
entail and how it would interact with the dirty page tracking code
during a live migration.

Our research also includes a look at real world data to motivate that
this optimization actually does make sense in practise. If you are
interested, you can find a draft of the relevant paper at:

https://www.dropbox.com/s/v7qzim8exmji6j5/paper.pdf?dl=0

Keep in mind that the paper is not published yet and is, hence, a work in progress.

As you can see, there are many open/unanswered questions, but I'm
hopeful that this feature will eventually become part of kvm/qemu such
that everyone can benefit from it.

Please find the current code at
https://bitbucket.org/tknauth/vecycle-qemu/branch/checkpoint-assisted-migration

Looking forward to your feedback,
Thomas.



[Qemu-devel] QUESTION:use tlb with alignment address

2015-04-14 Thread pandayt
hi all,
  When translating a read/write instruction, QEMU checks the TLB first, but why
is alignment needed when the read/write data is 2/4/8 bytes?



  For example, suppose there is an instruction that reads a double word (such as
mov ebx, [eax]) and the source address (i.e. eax) is 0x00401003, which is not
aligned to 4. The generated code then cannot use the TLB and instead jumps to a
helper_ld_xxx function, and we know the 'helper' function is much slower than
the TLB fast path.


  I think that when reading memory, no matter whether it is 1, 2, or 4 bytes, we
could use the TLB as long as the data lies within the same page.
  Am I right? 
  Thanks.
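
For reference, what the generated fast path effectively checks can be written
in C roughly like this (a conceptual simplification, not the actual TCG
output): the comparison folds the low (size - 1) bits into the page-mask test,
so a naturally aligned access is guaranteed not to cross a page, while an
unaligned one fails the compare and falls back to the helper even when it
would stay inside the page.

    /* conceptual sketch only */
    static inline bool tlb_fast_hit(target_ulong tlb_addr, target_ulong addr,
                                    int size)
    {
        target_ulong cmp = addr & (TARGET_PAGE_MASK | (size - 1));

        return tlb_addr == cmp;    /* mismatch -> slow path via helper_ld/st_* */
    }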



Re: [Qemu-devel] [PATCH 10/15] smbios: Add a function to directly add an entry

2015-04-14 Thread Paolo Bonzini


On 14/04/2015 08:31, Michael S. Tsirkin wrote:
> On Mon, Apr 13, 2015 at 06:40:46PM +0200, Paolo Bonzini wrote:
>>
>>
>> On 13/04/2015 18:34, Corey Minyard wrote:
> I made this the same as the ACPI code, which you have to have as a
> callback if you are adding it to a common SSDT.

 Not really I think.
>>>
>>> The AML functions require that you have a tree to attach what you are
>>> adding.  If you did your own SSDT, you wouldn't need a callback.  You
>>> could add a binary blob that gets put into the SSDT, but I think that
>>> would require adding some AML functions.
>>
>> I very much prefer the callback idea.  Long term it could be used by
>> more devices and possibly it could be turned into an AMLProvider QOM
>> interface.  Then the ACPI builder could iterate on all QOM devices and
>> just ask which of them can provide some AML.
> 
> Yes, that would make sense. Devices which have a static
> AML could provide a static AML property, with very little code,
> those that have dynamic AML - dynamic AML property with more code.

Why complicate the ACPI hooks unnecessarily?  The generic code is
written to support dynamic AML, why should devices support both?  This
seems like premature optimization.

> I was looking for ways to remove dependencies for this patchset,
> not add them.

I'm not sure which dependencies these are.

>> Also, tables are rebuilt when the firmware loads them, and handing in a
>> blob makes it harder to achieve this on-the-fly modification, compared
>> to callbacks.
> 
> Well that's not true for smbios, is it?

No, it's not.  For smbios, I agree that passing a static table to the
API (i.e. having smbios_register_device_table instead of
smbios_register_device_table_handler, and calling smbios_table_entry_add
directly from smbios_get_tables) is also a valid choice.
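
Purely for illustration (both signatures below are hypothetical, not the
actual API of the patchset, and the helper names are invented), the two shapes
under discussion would be roughly:

    /* 1) static: the device hands over a finished blob once, up front */
    smbios_register_device_table(blob, blob_len);

    /* 2) dynamic: the device registers a callback that smbios_get_tables()
     *    invokes at build time and that calls smbios_table_entry_add() itself */
    static void my_device_add_smbios(void *opaque)
    {
        smbios_table_entry_add(build_my_entry(opaque));   /* invented helper */
    }
    smbios_register_device_table_handler(my_device_add_smbios, dev);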

Paolo



Re: [Qemu-devel] [libvirt] [PATCH 3/5] qemu: add QEMU_CAPS_MACHINE_VMPORT_OPT

2015-04-14 Thread Martin Kletzander

On Tue, Apr 14, 2015 at 10:07:00AM -0600, Eric Blake wrote:

[adding qemu]

On 04/14/2015 09:58 AM, Marc-André Lureau wrote:

Hi

On Tue, Apr 14, 2015 at 4:25 PM, Martin Kletzander 
wrote:


Is this not exposed in any way in QEMU?  Do we really need to use this
(what we're trying to avoid)?



That works with the following change:

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 768cef1..1b20a7f 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2510,6 +2510,7 @@ struct virQEMUCapsCommandLineProps {

 static struct virQEMUCapsCommandLineProps virQEMUCapsCommandLine[] = {
 { "machine", "mem-merge", QEMU_CAPS_MEM_MERGE },
+{ "machine", "vmport", QEMU_CAPS_MACHINE_VMPORT_OPT },


Ouch.  qemu commit 0a7cf21 fixes what would have been a regression in
2.3 at exposing "mem-merge" through query-command-line-options, but it
does NOT expose "vmport", which is a per-architecture option rather than
a generic -machine option.  Which means that even though qemu 2.2
(perhaps wrongly) advertised "vmport" for all machines (even when it was
not supported), 2.3 will not advertise it, and we are hoping for a
better solution in 2.4 for properly advertising vmport on an
as-appropriate basis.

Yes, we WANT to use QMP probing,...


 { "drive", "discard", QEMU_CAPS_DRIVE_DISCARD },
 { "realtime", "mlock", QEMU_CAPS_MLOCK },
 { "boot-opts", "strict", QEMU_CAPS_BOOT_STRICT },
@@ -3243,10 +3244,6 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
 if (qemuCaps->version >= 1003000)
 virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_USB_OPT);

-/* vmport option is supported v2.2.0 onwards */
-if (qemuCaps->version >= 2002000)
-virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_VMPORT_OPT);


...and not version comparison, but we'll need something better in QMP
for 2.3 (which is rather late, since we missed 2.3-rc3) if you can't
come up with anything better for learning whether vmport is supported.



Ouch, I missed that.  But that's something we need not just for the vmport
attribute but for all the other machine-specific ones as well :(

I still think this might go in, though.




Re: [Qemu-devel] [V9fs-developer] [Bug 1336794] Re: 9pfs does not honor open file handles on unlinked files

2015-04-14 Thread Eric Van Hensbergen
That patch looks fine by me.  Happy to put it in the queue.  Thanks Al.

On Tue, Apr 14, 2015 at 11:07 AM Al Viro  wrote:

> On Mon, Apr 13, 2015 at 04:05:28PM +, Eric Van Hensbergen wrote:
> > Well, technically fid 3 isn't 'open', only fid 2 is open - at least
> > according to the protocol.  fid 3 and fid 2 are both clones of fid 1.
> > However, thanks for the alternative workaround.  If you get a chance, can
> > you check that my change to the client to partially fix this for the
> other
> > servers doesn't break nfs-ganesha:
> >
> >
> https://github.com/ericvh/linux/commit/fddc7721d6d19e4e6be4905f37ade5b0521f4ed5
>
> BTW, what the hell is going on in v9fs_vfs_mknod() and v9fs_vfs_link()?
> You allocate 4Kb buffer, sprintf into it ("b %u %u", "c %u %u", or "%d\n")
> feed it to v9fs_vfs_mkspecial() and immediately free it.  What's wrong with
> a local array of char?  In the worst case it needs to be char name[24] -
> surely, we are not so tight on stack that extra 16 bytes (that array
> instead
> of a pointer) would drive us over the cliff?
>
> IOW, do you have any problem with this:
> diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
> index 703342e..cda68f7 100644
> --- a/fs/9p/vfs_inode.c
> +++ b/fs/9p/vfs_inode.c
> @@ -1370,6 +1370,8 @@ v9fs_vfs_symlink(struct inode *dir, struct dentry
> *dentry, const char *symname)
> return v9fs_vfs_mkspecial(dir, dentry, P9_DMSYMLINK, symname);
>  }
>
> +#define U32_MAX_DIGITS 10
> +
>  /**
>   * v9fs_vfs_link - create a hardlink
>   * @old_dentry: dentry for file to link to
> @@ -1383,7 +1385,7 @@ v9fs_vfs_link(struct dentry *old_dentry, struct
> inode *dir,
>   struct dentry *dentry)
>  {
> int retval;
> -   char *name;
> +   char name[1 + U32_MAX_DIGITS + 2]; /* sign + number + \n + \0 */
> struct p9_fid *oldfid;
>
> p9_debug(P9_DEBUG_VFS, " %lu,%pd,%pd\n",
> @@ -1393,20 +1395,12 @@ v9fs_vfs_link(struct dentry *old_dentry, struct
> inode *dir,
> if (IS_ERR(oldfid))
> return PTR_ERR(oldfid);
>
> -   name = __getname();
> -   if (unlikely(!name)) {
> -   retval = -ENOMEM;
> -   goto clunk_fid;
> -   }
> -
> sprintf(name, "%d\n", oldfid->fid);
> retval = v9fs_vfs_mkspecial(dir, dentry, P9_DMLINK, name);
> -   __putname(name);
> if (!retval) {
> v9fs_refresh_inode(oldfid, d_inode(old_dentry));
> v9fs_invalidate_inode_attr(dir);
> }
> -clunk_fid:
> p9_client_clunk(oldfid);
> return retval;
>  }
> @@ -1425,7 +1419,7 @@ v9fs_vfs_mknod(struct inode *dir, struct dentry
> *dentry, umode_t mode, dev_t rde
>  {
> struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dir);
> int retval;
> -   char *name;
> +   char name[2 + U32_MAX_DIGITS + 1 + U32_MAX_DIGITS + 1];
> u32 perm;
>
> p9_debug(P9_DEBUG_VFS, " %lu,%pd mode: %hx MAJOR: %u MINOR: %u\n",
> @@ -1435,26 +1429,16 @@ v9fs_vfs_mknod(struct inode *dir, struct dentry
> *dentry, umode_t mode, dev_t rde
> if (!new_valid_dev(rdev))
> return -EINVAL;
>
> -   name = __getname();
> -   if (!name)
> -   return -ENOMEM;
> /* build extension */
> if (S_ISBLK(mode))
> sprintf(name, "b %u %u", MAJOR(rdev), MINOR(rdev));
> else if (S_ISCHR(mode))
> sprintf(name, "c %u %u", MAJOR(rdev), MINOR(rdev));
> -   else if (S_ISFIFO(mode))
> -   *name = 0;
> -   else if (S_ISSOCK(mode))
> +   else
> *name = 0;
> -   else {
> -   __putname(name);
> -   return -EINVAL;
> -   }
>
> perm = unixmode2p9mode(v9ses, mode);
> retval = v9fs_vfs_mkspecial(dir, dentry, perm, name);
> -   __putname(name);
>
> return retval;
>  }
>


Re: [Qemu-devel] [V9fs-developer] [Bug 1336794] Re: 9pfs does not honor open file handles on unlinked files

2015-04-14 Thread Al Viro
On Mon, Apr 13, 2015 at 04:05:28PM +, Eric Van Hensbergen wrote:
> Well, technically fid 3 isn't 'open', only fid 2 is open - at least
> according to the protocol.  fid 3 and fid 2 are both clones of fid 1.
> However, thanks for the alternative workaround.  If you get a chance, can
> you check that my change to the client to partially fix this for the other
> servers doesn't break nfs-ganesha:
> 
> https://github.com/ericvh/linux/commit/fddc7721d6d19e4e6be4905f37ade5b0521f4ed5

BTW, what the hell is going on in v9fs_vfs_mknod() and v9fs_vfs_link()?
You allocate a 4KB buffer, sprintf into it ("b %u %u", "c %u %u", or "%d\n"),
feed it to v9fs_vfs_mkspecial(), and immediately free it.  What's wrong with
a local array of char?  In the worst case it needs to be char name[24] -
surely, we are not so tight on stack that an extra 16 bytes (that array instead
of a pointer) would drive us over the cliff?

IOW, do you have any problem with this:
diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
index 703342e..cda68f7 100644
--- a/fs/9p/vfs_inode.c
+++ b/fs/9p/vfs_inode.c
@@ -1370,6 +1370,8 @@ v9fs_vfs_symlink(struct inode *dir, struct dentry 
*dentry, const char *symname)
return v9fs_vfs_mkspecial(dir, dentry, P9_DMSYMLINK, symname);
 }
 
+#define U32_MAX_DIGITS 10
+
 /**
  * v9fs_vfs_link - create a hardlink
  * @old_dentry: dentry for file to link to
@@ -1383,7 +1385,7 @@ v9fs_vfs_link(struct dentry *old_dentry, struct inode 
*dir,
  struct dentry *dentry)
 {
int retval;
-   char *name;
+   char name[1 + U32_MAX_DIGITS + 2]; /* sign + number + \n + \0 */
struct p9_fid *oldfid;
 
p9_debug(P9_DEBUG_VFS, " %lu,%pd,%pd\n",
@@ -1393,20 +1395,12 @@ v9fs_vfs_link(struct dentry *old_dentry, struct inode 
*dir,
if (IS_ERR(oldfid))
return PTR_ERR(oldfid);
 
-   name = __getname();
-   if (unlikely(!name)) {
-   retval = -ENOMEM;
-   goto clunk_fid;
-   }
-
sprintf(name, "%d\n", oldfid->fid);
retval = v9fs_vfs_mkspecial(dir, dentry, P9_DMLINK, name);
-   __putname(name);
if (!retval) {
v9fs_refresh_inode(oldfid, d_inode(old_dentry));
v9fs_invalidate_inode_attr(dir);
}
-clunk_fid:
p9_client_clunk(oldfid);
return retval;
 }
@@ -1425,7 +1419,7 @@ v9fs_vfs_mknod(struct inode *dir, struct dentry *dentry, 
umode_t mode, dev_t rde
 {
struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dir);
int retval;
-   char *name;
+   char name[2 + U32_MAX_DIGITS + 1 + U32_MAX_DIGITS + 1];
u32 perm;
 
p9_debug(P9_DEBUG_VFS, " %lu,%pd mode: %hx MAJOR: %u MINOR: %u\n",
@@ -1435,26 +1429,16 @@ v9fs_vfs_mknod(struct inode *dir, struct dentry 
*dentry, umode_t mode, dev_t rde
if (!new_valid_dev(rdev))
return -EINVAL;
 
-   name = __getname();
-   if (!name)
-   return -ENOMEM;
/* build extension */
if (S_ISBLK(mode))
sprintf(name, "b %u %u", MAJOR(rdev), MINOR(rdev));
else if (S_ISCHR(mode))
sprintf(name, "c %u %u", MAJOR(rdev), MINOR(rdev));
-   else if (S_ISFIFO(mode))
-   *name = 0;
-   else if (S_ISSOCK(mode))
+   else
*name = 0;
-   else {
-   __putname(name);
-   return -EINVAL;
-   }
 
perm = unixmode2p9mode(v9ses, mode);
retval = v9fs_vfs_mkspecial(dir, dentry, perm, name);
-   __putname(name);
 
return retval;
 }
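
[Editor's note: Al Viro's name[24] worst case can be checked with a small
userspace program.  The following is a standalone sketch, not kernel code;
the main() harness and variable names are mine, only the format strings and
buffer sizes come from the patch above.]

/* Standalone check of the worst-case lengths that justify the on-stack
 * buffers in the patch above. */
#include <stdio.h>
#include <stdint.h>

#define U32_MAX_DIGITS 10   /* UINT32_MAX = 4294967295 has 10 digits */

int main(void)
{
    char mknod_name[2 + U32_MAX_DIGITS + 1 + U32_MAX_DIGITS + 1];
    char link_name[1 + U32_MAX_DIGITS + 2];   /* sign + digits + '\n' + '\0' */
    int n;

    /* "b %u %u" / "c %u %u" with both values at UINT32_MAX: 23 chars + NUL */
    n = snprintf(mknod_name, sizeof(mknod_name), "b %u %u",
                 UINT32_MAX, UINT32_MAX);
    printf("mknod worst case: %d chars, buffer %zu bytes\n",
           n, sizeof(mknod_name));

    /* "%d\n" with the most negative 32-bit value: 12 chars + NUL */
    n = snprintf(link_name, sizeof(link_name), "%d\n", INT32_MIN);
    printf("link worst case:  %d chars, buffer %zu bytes\n",
           n, sizeof(link_name));

    return 0;
}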



Re: [Qemu-devel] [libvirt] [PATCH 3/5] qemu: add QEMU_CAPS_MACHINE_VMPORT_OPT

2015-04-14 Thread Eric Blake
[adding qemu]

On 04/14/2015 09:58 AM, Marc-André Lureau wrote:
> Hi
> 
> On Tue, Apr 14, 2015 at 4:25 PM, Martin Kletzander 
> wrote:
> 
>> Is this not exposed in any way in QEMU?  Do we really need to use this
>> (what we're trying to avoid)?
>>
> 
> That works with the following change:
> 
> diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
> index 768cef1..1b20a7f 100644
> --- a/src/qemu/qemu_capabilities.c
> +++ b/src/qemu/qemu_capabilities.c
> @@ -2510,6 +2510,7 @@ struct virQEMUCapsCommandLineProps {
> 
>  static struct virQEMUCapsCommandLineProps virQEMUCapsCommandLine[] = {
>  { "machine", "mem-merge", QEMU_CAPS_MEM_MERGE },
> +{ "machine", "vmport", QEMU_CAPS_MACHINE_VMPORT_OPT },

Ouch.  qemu commit 0a7cf21 fixes what would have been a regression in
2.3 in exposing "mem-merge" through query-command-line-options, but it
does NOT expose "vmport", which is a per-architecture option rather than
a generic -machine option.  This means that even though qemu 2.2
(perhaps wrongly) advertised "vmport" for all machines (even when it was
not supported), 2.3 will not advertise it, and we are hoping for a
better solution in 2.4 for properly advertising vmport on an
as-appropriate basis.

Yes, we WANT to use QMP probing,...

>  { "drive", "discard", QEMU_CAPS_DRIVE_DISCARD },
>  { "realtime", "mlock", QEMU_CAPS_MLOCK },
>  { "boot-opts", "strict", QEMU_CAPS_BOOT_STRICT },
> @@ -3243,10 +3244,6 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
>  if (qemuCaps->version >= 1003000)
>  virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_USB_OPT);
> 
> -/* vmport option is supported v2.2.0 onwards */
> -if (qemuCaps->version >= 2002000)
> -virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_VMPORT_OPT);

...and not version comparison, but we'll need something better in QMP
for 2.3 (which is rather late, since we missed 2.3-rc3) if you can't
come up with anything better for learning whether vmport is supported.

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org
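
[Editor's note: the table-driven probing that the quoted libvirt patch
extends works by asking qemu, via query-command-line-options, which
parameters each option accepts, and setting a capability bit per match.
The sketch below mimics that pattern in a self-contained way; the struct
mirrors virQEMUCapsCommandLineProps from the patch, but
qmp_option_has_param() and the sample data are stand-ins for the real QMP
query, not libvirt or QEMU API.]

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

enum { CAP_MEM_MERGE, CAP_MACHINE_VMPORT_OPT, CAP_LAST };

struct cmdline_prop {
    const char *option;   /* e.g. "machine" */
    const char *param;    /* e.g. "vmport"  */
    int cap;              /* capability to set when the param is advertised */
};

static const struct cmdline_prop props[] = {
    { "machine", "mem-merge", CAP_MEM_MERGE },
    { "machine", "vmport",    CAP_MACHINE_VMPORT_OPT },
};

/* Stand-in for the parameters reported by query-command-line-options for
 * "-machine"; a qemu that does not advertise vmport simply omits it. */
static const char *machine_params[] = { "mem-merge", "usb", NULL };

static bool qmp_option_has_param(const char *option, const char *param)
{
    if (strcmp(option, "machine") != 0)
        return false;
    for (int i = 0; machine_params[i]; i++)
        if (strcmp(machine_params[i], param) == 0)
            return true;
    return false;
}

int main(void)
{
    bool caps[CAP_LAST] = { false };

    for (size_t i = 0; i < sizeof(props) / sizeof(props[0]); i++)
        if (qmp_option_has_param(props[i].option, props[i].param))
            caps[props[i].cap] = true;

    printf("vmport capability: %s\n",
           caps[CAP_MACHINE_VMPORT_OPT] ? "set" : "not set");
    return 0;
}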





Re: [Qemu-devel] [PATCH 1/2] q35: implement SMRAM.D_LCK

2015-04-14 Thread Michael S. Tsirkin
On Tue, Apr 14, 2015 at 05:41:14PM +0200, Michael S. Tsirkin wrote:
> On Tue, Apr 14, 2015 at 03:12:39PM +0200, Gerd Hoffmann wrote:
> > Signed-off-by: Gerd Hoffmann 
> > ---
> >  hw/pci-host/q35.c | 17 -
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> > 
> > diff --git a/hw/pci-host/q35.c b/hw/pci-host/q35.c
> > index 79bab15..9227489 100644
> > --- a/hw/pci-host/q35.c
> > +++ b/hw/pci-host/q35.c
> > @@ -268,6 +268,20 @@ static void mch_update_smram(MCHPCIState *mch)
> >  PCIDevice *pd = PCI_DEVICE(mch);
> >  bool h_smrame = (pd->config[MCH_HOST_BRIDGE_ESMRAMC] & 
> > MCH_HOST_BRIDGE_ESMRAMC_H_SMRAME);
> >  
> > +/* implement SMRAM.D_LCK */
> > +if (pd->config[MCH_HOST_BRIDGE_SMRAM] & MCH_HOST_BRIDGE_SMRAM_D_LCK) {
> > +pd->config[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_OPEN;
> > +
> > +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_OPEN;
> > +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_LCK;
> > +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= 
> > ~MCH_HOST_BRIDGE_SMRAM_G_SMRAME;
> > +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= 
> > ~MCH_HOST_BRIDGE_SMRAM_C_BASE_SEG_MASK;
> > +
> > +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= 
> > ~MCH_HOST_BRIDGE_ESMRAMC_H_SMRAME;
> > +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= 
> > ~MCH_HOST_BRIDGE_ESMRAMC_TSEG_SZ_MASK;
> > +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= 
> > ~MCH_HOST_BRIDGE_ESMRAMC_T_EN;
> 
> 
> I'd prefer a single statement:
>   pd->wmask[MCH_HOST_BRIDGE_SMRAM] &=
>   ~(MCH_HOST_BRIDGE_SMRAM_D_OPEN | MCH_HOST_BRIDGE_SMRAM_D_LCK | 
> ... )
> 
> > +}
> > +
> >  memory_region_transaction_begin();
> >  
> >  if (pd->config[MCH_HOST_BRIDGE_SMRAM] & SMRAM_D_OPEN) {
> > @@ -297,7 +311,6 @@ static void mch_write_config(PCIDevice *d,
> >  {
> >  MCHPCIState *mch = MCH_PCI_DEVICE(d);
> >  
> > -/* XXX: implement SMRAM.D_LOCK */
> >  pci_default_write_config(d, address, val, len);
> >  
> >  if (ranges_overlap(address, len, MCH_HOST_BRIDGE_PAM0,
> > @@ -351,6 +364,8 @@ static void mch_reset(DeviceState *qdev)
> >   MCH_HOST_BRIDGE_PCIEXBAR_DEFAULT);
> >  
> >  d->config[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_DEFAULT;
> > +d->wmask[MCH_HOST_BRIDGE_SMRAM] = 0xff;
> 
> Is this right? I see a bunch of reserved bits etc there.
> 
> 
> > +d->wmask[MCH_HOST_BRIDGE_ESMRAMC] = 0xff;

And this mask seems not to match the spec, either.

> Doesn't this mean we need to reset this register now?
> 
> >  
> >  mch_update(mch);
> >  }
> 
> Don't you also need to clear D_LCK?
> 
> > -- 
> > 1.8.3.1



Re: [Qemu-devel] [PATCH 1/2] q35: implement SMRAM.D_LCK

2015-04-14 Thread Michael S. Tsirkin
On Tue, Apr 14, 2015 at 03:12:39PM +0200, Gerd Hoffmann wrote:
> Signed-off-by: Gerd Hoffmann 
> ---
>  hw/pci-host/q35.c | 17 -
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/pci-host/q35.c b/hw/pci-host/q35.c
> index 79bab15..9227489 100644
> --- a/hw/pci-host/q35.c
> +++ b/hw/pci-host/q35.c
> @@ -268,6 +268,20 @@ static void mch_update_smram(MCHPCIState *mch)
>  PCIDevice *pd = PCI_DEVICE(mch);
>  bool h_smrame = (pd->config[MCH_HOST_BRIDGE_ESMRAMC] & 
> MCH_HOST_BRIDGE_ESMRAMC_H_SMRAME);
>  
> +/* implement SMRAM.D_LCK */
> +if (pd->config[MCH_HOST_BRIDGE_SMRAM] & MCH_HOST_BRIDGE_SMRAM_D_LCK) {
> +pd->config[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_OPEN;
> +
> +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_OPEN;
> +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_D_LCK;
> +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= ~MCH_HOST_BRIDGE_SMRAM_G_SMRAME;
> +pd->wmask[MCH_HOST_BRIDGE_SMRAM] &= 
> ~MCH_HOST_BRIDGE_SMRAM_C_BASE_SEG_MASK;
> +
> +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= 
> ~MCH_HOST_BRIDGE_ESMRAMC_H_SMRAME;
> +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= 
> ~MCH_HOST_BRIDGE_ESMRAMC_TSEG_SZ_MASK;
> +pd->wmask[MCH_HOST_BRIDGE_ESMRAMC] &= ~MCH_HOST_BRIDGE_ESMRAMC_T_EN;


I'd prefer a single statement:
pd->wmask[MCH_HOST_BRIDGE_SMRAM] &=
~(MCH_HOST_BRIDGE_SMRAM_D_OPEN | MCH_HOST_BRIDGE_SMRAM_D_LCK | 
... )

> +}
> +
>  memory_region_transaction_begin();
>  
>  if (pd->config[MCH_HOST_BRIDGE_SMRAM] & SMRAM_D_OPEN) {
> @@ -297,7 +311,6 @@ static void mch_write_config(PCIDevice *d,
>  {
>  MCHPCIState *mch = MCH_PCI_DEVICE(d);
>  
> -/* XXX: implement SMRAM.D_LOCK */
>  pci_default_write_config(d, address, val, len);
>  
>  if (ranges_overlap(address, len, MCH_HOST_BRIDGE_PAM0,
> @@ -351,6 +364,8 @@ static void mch_reset(DeviceState *qdev)
>   MCH_HOST_BRIDGE_PCIEXBAR_DEFAULT);
>  
>  d->config[MCH_HOST_BRIDGE_SMRAM] = MCH_HOST_BRIDGE_SMRAM_DEFAULT;
> +d->wmask[MCH_HOST_BRIDGE_SMRAM] = 0xff;

Is this right? I see a bunch of reserved bits etc there.


> +d->wmask[MCH_HOST_BRIDGE_ESMRAMC] = 0xff;

Doesn't this mean we need to reset this register now?

>  
>  mch_update(mch);
>  }

Don't you also need to clear D_LCK?

> -- 
> 1.8.3.1
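
[Editor's note: the single-statement form Michael asks for would fold the
four SMRAM and three ESMRAMC per-bit clears into one combined mask per
register.  A minimal sketch, assuming the patch's macro names; the numeric
values below are placeholders so the snippet compiles standalone, the real
definitions live in hw/pci-host/q35.h.]

#include <stdio.h>
#include <stdint.h>

#define MCH_HOST_BRIDGE_SMRAM_D_OPEN            0x40
#define MCH_HOST_BRIDGE_SMRAM_D_LCK             0x10
#define MCH_HOST_BRIDGE_SMRAM_G_SMRAME          0x08
#define MCH_HOST_BRIDGE_SMRAM_C_BASE_SEG_MASK   0x07

int main(void)
{
    uint8_t wmask = 0xff;

    /* One combined clear instead of four separate &= statements. */
    wmask &= ~(MCH_HOST_BRIDGE_SMRAM_D_OPEN |
               MCH_HOST_BRIDGE_SMRAM_D_LCK |
               MCH_HOST_BRIDGE_SMRAM_G_SMRAME |
               MCH_HOST_BRIDGE_SMRAM_C_BASE_SEG_MASK);

    printf("SMRAM wmask after lock: 0x%02x\n", wmask);
    return 0;
}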



Re: [Qemu-devel] [PATCH 10/15] smbios: Add a function to directly add an entry

2015-04-14 Thread Corey Minyard
On 04/14/2015 01:31 AM, Michael S. Tsirkin wrote:
> On Mon, Apr 13, 2015 at 06:40:46PM +0200, Paolo Bonzini wrote:
>>
>> On 13/04/2015 18:34, Corey Minyard wrote:
>>>>> I made this the same as the ACPI code, which you have to have as a
>>>>> callback if you are adding it to a common SSDT.
>>>> Not really I think.
>>> The AML functions require that you have a tree to attach what you are
>>> adding.  If you did your own SSDT, you wouldn't need a callback.  You
>>> could add a binary blob that gets put into the SSDT, but I think that
>>> would require adding some AML functions.
>> I very much prefer the callback idea.  Long term it could be used by
>> more devices and possibly it could be turned into an AMLProvider QOM
>> interface.  Then the ACPI builder could iterate on all QOM devices and
>> just ask which of them can provide some AML.
> Yes, that would make sense. Devices which have static
> AML could provide a static AML property with very little code;
> those that have dynamic AML, a dynamic AML property with more code.
>
> But I don't see callbacks as a step in that direction -
> more like code that will have to be ripped out later.
>
> I was looking for ways to remove dependencies for this patchset,
> not add them.

I don't see a dependency difference between the two.  Either way we are
adding something to the ACPI code to store these things, and adding
something to i386 (and arm64, when that happens) to get these entries
from ACPI and add them to the firmware.

>> Also, tables are rebuilt when the firmware loads them, and handing in a
>> blob makes it harder to achieve this on-the-fly modification, compared
>> to callbacks.
>>
>> Paolo
> Well that's not true for smbios, is it?

Both tables are dynamically built at runtime.  The SMBIOS table is fixed
size, but the contents will vary depending on what the user specifies
about IPMI.

-corey
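
[Editor's note: a rough illustration of the callback/provider pattern being
debated above - devices register a build function and the table builder
iterates over all of them whenever the tables are (re)built, so dynamic
content such as a user-configured IPMI entry stays current.  None of the
names below are real QEMU or QOM APIs; this is only a sketch of the shape
of such an interface.]

#include <stdio.h>

#define MAX_PROVIDERS 16

typedef void (*build_entry_fn)(void *opaque);

static struct {
    build_entry_fn build;
    void *opaque;
} providers[MAX_PROVIDERS];
static int nr_providers;

static void register_table_provider(build_entry_fn build, void *opaque)
{
    if (nr_providers == MAX_PROVIDERS)
        return;                       /* sketch: ignore overflow */
    providers[nr_providers].build = build;
    providers[nr_providers].opaque = opaque;
    nr_providers++;
}

/* Called each time the tables are rebuilt, so dynamic entries stay current. */
static void rebuild_tables(void)
{
    for (int i = 0; i < nr_providers; i++)
        providers[i].build(providers[i].opaque);
}

static void ipmi_build_smbios_entry(void *opaque)
{
    /* SMBIOS type 38 is the IPMI device information structure. */
    printf("adding SMBIOS type 38 entry for %s\n", (const char *)opaque);
}

int main(void)
{
    register_table_provider(ipmi_build_smbios_entry, (void *)"ipmi-bt");
    rebuild_tables();
    return 0;
}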



[Qemu-devel] [PATCH] m25p80: add missing blk_attach_dev_nofail

2015-04-14 Thread Paolo Bonzini
Of the block devices that poked into -drive options via drive_get_next,
m25p80 was the only one that also did not attach itself to the BlockBackend.

Since sd does it, and all other devices go through a "drive" property,
with this change all block backends attached to the guest will have a
non-NULL result for blk_get_attached_dev().

Signed-off-by: Paolo Bonzini 
---
 hw/block/m25p80.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index afe243b..728e384 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -629,6 +629,7 @@ static int m25p80_init(SSISlave *ss)
 if (dinfo) {
 DB_PRINT_L(0, "Binding to IF_MTD drive\n");
 s->blk = blk_by_legacy_dinfo(dinfo);
+blk_attach_dev_nofail(s->blk, s);
 
 /* FIXME: Move to late init */
 if (blk_read(s->blk, 0, s->storage,
-- 
2.3.5
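
[Editor's note: to illustrate why the attachment matters, here is a toy
model of the backend/device relationship the commit message describes -
once the frontend attaches itself, a query for the attached device returns
the owner instead of NULL.  None of the code below is QEMU's block layer;
the real calls are blk_attach_dev_nofail() and blk_get_attached_dev().]

#include <stdio.h>

struct block_backend {
    const char *name;
    void *attached_dev;     /* NULL until a frontend attaches itself */
};

struct flash_device {
    const char *model;
};

static void attach_dev_nofail(struct block_backend *blk, void *dev)
{
    blk->attached_dev = dev;    /* the step m25p80_init() was missing */
}

static void *get_attached_dev(struct block_backend *blk)
{
    return blk->attached_dev;
}

int main(void)
{
    struct block_backend mtd_backend = { "mtd0", NULL };
    struct flash_device flash = { "m25p80" };

    printf("before attach: %p\n", get_attached_dev(&mtd_backend));  /* NULL */
    attach_dev_nofail(&mtd_backend, &flash);
    printf("after attach:  %p (%s)\n", get_attached_dev(&mtd_backend),
           ((struct flash_device *)get_attached_dev(&mtd_backend))->model);
    return 0;
}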




  1   2   >