Re: [PATCH 4/4] scripts/qmp: Fix QEMU Python scripts path

2020-05-01 Thread Markus Armbruster
John Snow  writes:

> On 4/30/20 1:04 AM, Markus Armbruster wrote:
>> John Snow  writes:
>> 
>>> On 4/21/20 5:42 AM, Philippe Mathieu-Daudé wrote:
 QEMU Python scripts have been moved in commit 8f8fd9edba4 ("Introduce
 Python module structure"). Use the same sys.path modification used
 in the referenced commit to be able to use these scripts again.

 Signed-off-by: Philippe Mathieu-Daudé 
 ---
  scripts/qmp/qmp  | 4 +++-
  scripts/qmp/qom-fuse | 4 +++-
  scripts/qmp/qom-get  | 4 +++-
  scripts/qmp/qom-list | 4 +++-
  scripts/qmp/qom-set  | 4 +++-
  scripts/qmp/qom-tree | 4 +++-
  6 files changed, 18 insertions(+), 6 deletions(-)

 diff --git a/scripts/qmp/qmp b/scripts/qmp/qmp
 index 0625fc2aba..8e52e4a54d 100755
 --- a/scripts/qmp/qmp
 +++ b/scripts/qmp/qmp
 @@ -11,7 +11,9 @@
  # See the COPYING file in the top-level directory.
  
  import sys, os
 -from qmp import QEMUMonitorProtocol
 +
 +sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..', 'python'))
 +from qemu.qmp import QEMUMonitorProtocol
  
>>>
>>> Try to avoid using sys.path hacks; they don't work in pylint or mypy and
>>> it provides an active barrier to CQA work here.
>>> (They also tend to be quite fragile.)
>>>
>>> We can discuss the right way to do this; one of those ways is to create
>>> an installable package that we can install locally in a virtual environment.
>>>
>>> Another way is perhaps to set PYTHONPATH in the calling environment so
>>> that standard "import" directives will work.
>>>
>>> Both ultimately involve changing the environment of the user to
>>> accommodate the script.
>> 
>> For what it's worth, tests/Makefile.include does the latter for
>> tests/qapi-schema/test-qapi.py.  Simple enough, but makes manual
>> invocation inconvenient.
>> 
>> Not necessary for scripts/qapi-gen.py, because its "import qapi.FOO"
>> finds qapi right in scripts/qapi/.
>> 
>
> Yes, using "proper" package hierarchies often means the loss of being
> able to invoke the scripts directly, unless you are careful to organize
> the package such that the scripts can run both in an "unpackaged" and a
> "packaged" mode.
>
> It can be done, but it's tricky and can be prone to error. Let's take a
> look at how to do it!
>
> Let's imagine we have an honest-to-goodness QAPI parser module. In
> isolation, the layout for such a package would probably look like this:
>
> qapi.git/
>   setup.py
>   qapi-gen.py
>   README.rst
>   qapi/
> __init__.py
> parser.py
> schema.py
> ...etc
>
>
> Now, anything inside of qapi/ is considered the "qapi module" and you
> will be unable to directly execute anything inside of this folder,
> unless it does not depend on anything else in the "qapi module".
>
> i.e. "import qapi.x" will work, but only from the executing context of a
> thread that understands how to find "qapi". If you are IN this
> directory, you do not have that context, so those directives will not work.
>
> Python imports are always handled relative to the importing file, not
> the imported file.
>
> qapi-gen in the parent directory, however, can use "from qapi import
> parser" without any problem, because if you are executing it directly,
> it will be able to see the "qapi module" as a folder.

Hmm...

$ git-grep '^from.*schema' scripts/
scripts/qapi-gen.py:from qapi.schema import QAPIError, QAPISchema
scripts/qapi/events.py:from qapi.schema import QAPISchemaEnumMember
scripts/qapi/gen.py:from qapi.schema import QAPISchemaVisitor
scripts/qapi/introspect.py:from qapi.schema import (QAPISchemaArrayType, QAPISchemaBuiltinType,
scripts/qapi/types.py:from qapi.schema import QAPISchemaEnumMember, QAPISchemaObjectType
scripts/qapi/visit.py:from qapi.schema import QAPISchemaObjectType

How come importing from qapi.schema works in scripts/qapi/*.py, too?

[...]




Re: [PATCH v2 9/9] block/io: expand in_flight inc/dec section: bdrv_make_zero

2020-05-01 Thread Eric Blake

On 4/27/20 9:39 AM, Vladimir Sementsov-Ogievskiy wrote:

It's safer to expand in_flight request to start before enter to
coroutine in synchronous wrappers and end after BDRV_POLL_WHILE loop.
Note that qemu_coroutine_enter may only schedule the coroutine in some
circumstances.


See my wording suggestions earlier in the series.



bdrv_make_zero update includes refactoring: move the whole loop into
coroutine, which has additional benefit of not create/enter new
coroutine on each iteration.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  block/io.c | 54 +++---
  1 file changed, 51 insertions(+), 3 deletions(-)




+int bdrv_make_zero(BdrvChild *child, BdrvRequestFlags flags)
+{
+int ret;
+
+bdrv_inc_in_flight(child->bs);
+
+if (qemu_in_coroutine()) {
+/* Fast-path if already in coroutine context */
+ret = bdrv_do_make_zero(child, flags);
+} else {
+BdrvDoMakeZeroData data = {
+.child = child,
+.flags = flags,
+.done = false,


Another case where the line '.done = false,' is optional, thanks to C 
semantics, but does not hurt to leave it in.


Reviewed-by: Eric Blake 

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




Re: [PATCH v2 7/9] block/io: add bdrv_do_pwrite_zeroes

2020-05-01 Thread Eric Blake

On 4/27/20 9:39 AM, Vladimir Sementsov-Ogievskiy wrote:

We'll need a bdrv_co_pwrite_zeroes version without inc/dec in_flight to
be used in further implementation of bdrv_make_zero.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Stefan Hajnoczi 
---
  block/io.c | 23 +++
  1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/block/io.c b/block/io.c
index 1cb6f433e5..e6a8ead46c 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2016,8 +2016,10 @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
  return ret;
  }
  
-int coroutine_fn bdrv_co_pwrite_zeroes(BdrvChild *child, int64_t offset,

-   int bytes, BdrvRequestFlags flags)
+/* To be called between exactly one pair of bdrv_inc/dec_in_flight() */
+static int coroutine_fn
+bdrv_do_pwrite_zeroes(BdrvChild *child, int64_t offset, int bytes,
+  BdrvRequestFlags flags)


I assume your 64-bit conversion series is based on top of this one, and 
therefore this gets cleaned up there to take a 64-bit bytes request.  In 
the meantime, sticking to 32-bit is fine.


Reviewed-by: Eric Blake 





Re: [PATCH v2 6/9] block/io: expand in_flight inc/dec section: block-status

2020-05-01 Thread Eric Blake

On 4/27/20 9:39 AM, Vladimir Sementsov-Ogievskiy wrote:

It's safer to expand in_flight request to start before enter to
coroutine in synchronous wrappers and end after BDRV_POLL_WHILE loop.
Note that qemu_coroutine_enter may only schedule the coroutine in some
circumstances.


Wording suggestion:

It's safer to expand the region protected by an in_flight request to 
begin in the synchronous wrapper and end after the BDRV_POLL_WHILE loop. 
 Leaving the in_flight request in the coroutine itself risks a race 
where calling qemu_coroutine_enter() may have only scheduled, rather 
than started, the coroutine, allowing some other thread a chance to not 
realize an operation is in flight.




block-status requests are complex, they involve querying different
block driver states across backing chain. Let's expand only in_flight
section for the top bs, keeping other sections as is.


block-status requests are complex, involving a query of different block 
driver states across the backing chain.  Let's expand only the in_flight 
section for the top bs, and keep the other sections as-is.


I'd welcome Kevin's review on my next comment, but if I'm correct, I 
think we can further add the following justification to the commit message:


Gathering block status only requires reads from the block device, and 
backing devices are typically read-only, so losing any in_flight race on 
a backing device is less likely to cause problems with concurrent 
modifications on the overall backing chain.




Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  block/io.c | 65 ++
  1 file changed, 51 insertions(+), 14 deletions(-)

diff --git a/block/io.c b/block/io.c
index a91d8c1e21..1cb6f433e5 100644
--- a/block/io.c



@@ -2624,15 +2646,19 @@ int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t offset,
   * words, the result is not necessarily the maximum possible range);
   * but 'pnum' will only be 0 when end of file is reached.
   *
+ * To be called between exactly one pair of bdrv_inc/dec_in_flight() for top bs.
+ * bdrv_do_is_allocated_above takes care of increasing in_fligth for other block


in_flight


+ * driver states from bs backing chain.
   */
  static int coroutine_fn
-bdrv_co_is_allocated_above(BlockDriverState *top, BlockDriverState *base,
+bdrv_do_is_allocated_above(BlockDriverState *top, BlockDriverState *base,
 bool include_base, int64_t offset, int64_t bytes,
 int64_t *pnum)



@@ -2682,11 +2710,16 @@ typedef struct BdrvCoIsAllocatedAboveData {
  bool done;
  } BdrvCoIsAllocatedAboveData;
  
+/*

+ * To be called between exactly one pair of bdrv_inc/dec_in_flight() for top bs.
+ * bdrv_do_is_allocated_above takes care of increasing in_fligth for other block
+ * driver states from the backing chain.
+ */
  static void coroutine_fn bdrv_is_allocated_above_co_entry(void *opaque)


and again

Otherwise looks reasonable to me.  Fixing typos is trivial, so:

Reviewed-by: Eric Blake 





Re: [PATCH v2 5/9] block/io: expand in_flight inc/dec section: simple cases

2020-05-01 Thread Eric Blake

On 4/27/20 9:39 AM, Vladimir Sementsov-Ogievskiy wrote:

It's safer to expand in_flight request to start before enter to
coroutine in synchronous wrappers, due to the following (theoretical)
problem:

Consider write.
It's possible, that qemu_coroutine_enter only schedules execution,
assume such case.

Then we may possibly have the following:

1. Somehow check that we are not in drained section in outer code.

2. Call bdrv_pwritev(), assuming that it will increase in_flight, which
will protect us from starting drained section.

3. It calls bdrv_prwv_co() -> bdrv_coroutine_enter() (not yet increased
in_flight).

4. Assume coroutine not yet actually entered, only scheduled, and we go
to some code, which starts drained section (as in_flight is zero).

5. Scheduled coroutine starts, and blindly increases in_flight, and we
are in drained section with in_flight request.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  block/io.c | 161 +
  1 file changed, 124 insertions(+), 37 deletions(-)


  
+int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,

+int64_t offset, unsigned int bytes,
+QEMUIOVector *qiov, size_t qiov_offset,
+BdrvRequestFlags flags)
+{


Doesn't seem to be the usual indentation in this file.


@@ -1922,7 +1934,8 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
  return bdrv_co_pwritev_part(child, offset, bytes, qiov, 0, flags);
  }
  
-int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,

+/* To be called between exactly one pair of bdrv_inc/dec_in_flight() */
+static int coroutine_fn bdrv_do_pwritev_part(BdrvChild *child,
   int64_t offset, unsigned int bytes, QEMUIOVector *qiov, size_t qiov_offset,
  BdrvRequestFlags flags)
  {


then again, it was in use here, and saves reindenting the remaining 
lines.  I'll let the maintainer decide which style is preferred.



@@ -2014,17 +2038,18 @@ typedef struct RwCo {
  BdrvRequestFlags flags;
  } RwCo;
  
+/* To be called between exactly one pair of bdrv_inc/dec_in_flight() */

  static void coroutine_fn bdrv_rw_co_entry(void *opaque)
  {
  RwCo *rwco = opaque;
  
  if (!rwco->is_write) {

-rwco->ret = bdrv_co_preadv(rwco->child, rwco->offset,
-   rwco->qiov->size, rwco->qiov,
+rwco->ret = bdrv_do_preadv_part(rwco->child, rwco->offset,
+   rwco->qiov->size, rwco->qiov, 0,
 rwco->flags);


Indentation is now off.


  } else {
-rwco->ret = bdrv_co_pwritev(rwco->child, rwco->offset,
-rwco->qiov->size, rwco->qiov,
+rwco->ret = bdrv_do_pwritev_part(rwco->child, rwco->offset,
+rwco->qiov->size, rwco->qiov, 0,
  rwco->flags);


and again


@@ -3411,9 +3478,12 @@ static void bdrv_parent_cb_resize(BlockDriverState *bs)
   * If 'exact' is true, the file must be resized to exactly the given
   * 'offset'.  Otherwise, it is sufficient for the node to be at least
   * 'offset' bytes in length.
+ *
+ * To be called between exactly one pair of bdrv_inc/dec_in_flight()
   */
-int coroutine_fn bdrv_co_truncate(BdrvChild *child, int64_t offset, bool exact,
-  PreallocMode prealloc, Error **errp)
+static int coroutine_fn bdrv_do_truncate(BdrvChild *child,
+ int64_t offset, bool exact,
+ PreallocMode prealloc, Error **errp)


Needs to be rebased, now that master has Kevin's patches adding a 
'BdrvRequestFlags flags' parameter.  But the rebase should be obvious.


Otherwise looks sane to me, but I may be missing one of the finer points 
on which functions should be decorated with 'coroutine_fn'.


Reviewed-by: Eric Blake 





Re: [PATCH v2 1/9] block/io: refactor bdrv_is_allocated_above to run only one coroutine

2020-05-01 Thread Eric Blake

On 4/27/20 9:38 AM, Vladimir Sementsov-Ogievskiy wrote:

bdrv_is_allocated_above creates new coroutine on each iteration if
called from non-coroutine context. To simplify expansion of in_flight
inc/dec sections in further patch let's refactor it.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
---
  block/io.c | 76 ++
  1 file changed, 71 insertions(+), 5 deletions(-)



Quite a lot of lines added, but it fits the mechanical boilerplate 
we have elsewhere.



diff --git a/block/io.c b/block/io.c
index aba67f66b9..94ab8eaa0f 100644
--- a/block/io.c
+++ b/block/io.c



+int bdrv_is_allocated_above(BlockDriverState *top, BlockDriverState *base,
+bool include_base, int64_t offset, int64_t bytes,
+int64_t *pnum)
+{
+Coroutine *co;
+BdrvCoIsAllocatedAboveData data = {
+.top = top,
+.base = base,
+.include_base = include_base,
+.offset = offset,
+.bytes = bytes,
+.pnum = pnum,
+.done = false,
+};


Omitting the line '.done = false,' has the same effect, since once you 
use a designated initializer, all remaining unspecified fields are 
0-initialized.  But explicitly mentioning it doesn't hurt.


Reviewed-by: Eric Blake 





Re: [PATCH] qcow2: Avoid integer wraparound in qcow2_co_truncate()

2020-05-01 Thread Eric Blake

On 5/1/20 12:12 PM, Eric Blake wrote:

On 5/1/20 8:15 AM, Alberto Garcia wrote:

After commit f01643fb8b47e8a70c04bbf45e0f12a9e5bc54de when an image is
extended and BDRV_REQ_ZERO_WRITE is set then the new clusters are
zeroized.

The code however does not detect correctly situations when the old and
the new end of the image are within the same cluster. The problem can
be reproduced with these steps:

    qemu-img create -f qcow2 backing.qcow2 1M
    qemu-img create -f qcow2 -b backing.qcow2 top.qcow2


We should get in the habit of documenting -F qcow2 (I have a series, 
still awaiting review, that would warn if you don't).



    qemu-img resize --shrink top.qcow2 520k
    qemu-img resize top.qcow2 567k



Since your reproducer triggers assertion failure, I suggest doing this 
instead:




+++ b/block/qcow2.c
@@ -4234,6 +4234,9 @@ static int coroutine_fn qcow2_co_truncate(BlockDriverState *bs, int64_t offset,

  if ((flags & BDRV_REQ_ZERO_WRITE) && offset > old_length) {
  uint64_t zero_start = QEMU_ALIGN_UP(old_length, s->cluster_size);

+    /* zero_start should not be after the new end of the image */
+    zero_start = MIN(zero_start, offset);
+


Drop this hunk (leave zero_start unchanged), and instead...



So, using your numbers, pre-patch, we have zero_start = 0x90000 (0x82000 
rounded up to 0x10000 alignment).  post-patch, the new MIN() lowers it 
back to 0x8dc00 (the new size), which is unaligned.



  /*
   * Use zero clusters as much as we can. qcow2_cluster_zeroize()
   * requires a cluster-aligned start. The end may be unaligned if it is

  * at the end of the image (which it is here).
  */
     ret = qcow2_cluster_zeroize(bs, zero_start, offset - zero_start, 0);


...patch _this_ call to compute 'QEMU_ALIGN_UP(offset, s->cluster_size) 
- zero_start' for the length.






Re: [PATCH] qcow2: Avoid integer wraparound in qcow2_co_truncate()

2020-05-01 Thread Eric Blake

On 5/1/20 8:15 AM, Alberto Garcia wrote:

After commit f01643fb8b47e8a70c04bbf45e0f12a9e5bc54de when an image is
extended and BDRV_REQ_ZERO_WRITE is set then the new clusters are
zeroized.

The code however does not detect correctly situations when the old and
the new end of the image are within the same cluster. The problem can
be reproduced with these steps:

qemu-img create -f qcow2 backing.qcow2 1M
qemu-img create -f qcow2 -b backing.qcow2 top.qcow2


We should get in the habit of documenting -F qcow2 (I have a series, 
still awaiting review, that would warn if you don't).



qemu-img resize --shrink top.qcow2 520k
qemu-img resize top.qcow2 567k

In the last step offset - zero_start causes an integer wraparound.

Signed-off-by: Alberto Garcia 
---
  block/qcow2.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/block/qcow2.c b/block/qcow2.c
index 2ba0b17c39..6d34d28c60 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -4234,6 +4234,9 @@ static int coroutine_fn qcow2_co_truncate(BlockDriverState *bs, int64_t offset,
  if ((flags & BDRV_REQ_ZERO_WRITE) && offset > old_length) {
  uint64_t zero_start = QEMU_ALIGN_UP(old_length, s->cluster_size);
  
+/* zero_start should not be after the new end of the image */

+zero_start = MIN(zero_start, offset);
+


So, using your numbers, pre-patch, we have zero_start = 0x90000 (0x82000 
rounded up to 0x10000 alignment).  post-patch, the new MIN() lowers it 
back to 0x8dc00 (the new size), which is unaligned.



  /*
   * Use zero clusters as much as we can. qcow2_cluster_zeroize()
   * requires a cluster-aligned start. The end may be unaligned if it is

 * at the end of the image (which it is here).
 */
ret = qcow2_cluster_zeroize(bs, zero_start, offset - zero_start, 0);


pre-patch, it called zeroize(bs, 0x90000, 0xffffffffffffdc00, 0)
post-patch, it calls zeroize(bs, 0x8dc00, 0, 0)

Looking at qcow2_cluster_zeroize, we have:
assert(QEMU_IS_ALIGNED(offset, s->cluster_size));

which will now trigger.  This patch is a good idea, but needs a v2.





Re: [PULL 0/4] Block patches

2020-05-01 Thread Stefan Hajnoczi
On Fri, May 01, 2020 at 10:32:02AM +0100, Peter Maydell wrote:
> On Fri, 1 May 2020 at 09:28, Stefan Hajnoczi  wrote:
> >
> > The following changes since commit 27c94566379069fb8930bb1433dcffbf7df3203d:
> >
> >   Merge remote-tracking branch 
> > 'remotes/edgar/tags/edgar/xilinx-next-2020-04-30.for-upstream' into staging 
> > (2020-04-30 16:47:23 +0100)
> >
> > are available in the Git repository at:
> >
> >   https://github.com/stefanha/qemu.git tags/block-pull-request
> >
> > for you to fetch changes up to cc1adc4488059ac16d4d2772a7aa7cd1323deeca:
> >
> >   lockable: Replace locks with lock guard macros (2020-05-01 09:19:25 +0100)
> >
> > 
> > Pull request
> >
> > Fix the QEMU_LOCK_GUARD() macros, use them more widely, and allow the fuzzer
> > target to be selected from argv[0].
> >
> > 
> 
> Hi; this pullreq seems to include a stray change to the slirp
> submodule in the "fuzz: select fuzz target using executable name"
> commit. Could you fix that and resend, please?
> 
> (You might like to include a molly-guard in your pullreq
> creation scripts; on my end I catch this sort of thing
> when applying with a test like
> if git diff master..staging | grep -q 'Subproject commit'; then
> # complain and exit unless I used an explicit command
> # line option to say I intended to include a submodule update
> fi
> 
> though I haven't yet put the same test in the script I use
> to send pullreqs, for some reason. I guess my workflow now
> means I don't tend to accidentally commit submodule changes.)

Sorry for the spurious change.  Will send a v2!

Stefan




Re: Backup of vm disk images

2020-05-01 Thread Stefan Hajnoczi
On Wed, Apr 22, 2020 at 07:51:09AM +0200, Anders Östling wrote:
> I am fighting to understand the difference between backing up a VM by
> using a regular copy vs using the virsh blockcopy command.
> What I want to do is to suspend the vm, copy the XML and .QCOW2 files
> and then resume the vm again. What are your thoughts? What are the
> drawbacks compared to other methods?

Hi Anders,
The k...@vger.kernel.org mailing list is mostly for the discussion and
development of the KVM kernel module so you may not get replies.  I have
CCed libvir-list and developers who have been involved in libvirt backup
features.

A naive cp(1) command will be very slow because the entire disk image is
copied to a new file.  The fastest solution with cp(1) is the --reflink
flag which basically takes a snapshot of the file and shares the disk
blocks (only available when the host file system supports it and not
available across mounts).

Libvirt's backup commands are more powerful.  They can do things like
copy out a point-in-time snapshot of the disk while the guest is
running.  They also support incremental backup so you don't need to
store a full copy of the disk image each time you take a backup.

I hope others will join the discussion and give examples of some of the
available features.

Stefan




[PATCH] qcow2: Avoid integer wraparound in qcow2_co_truncate()

2020-05-01 Thread Alberto Garcia
After commit f01643fb8b47e8a70c04bbf45e0f12a9e5bc54de when an image is
extended and BDRV_REQ_ZERO_WRITE is set then the new clusters are
zeroized.

The code however does not detect correctly situations when the old and
the new end of the image are within the same cluster. The problem can
be reproduced with these steps:

   qemu-img create -f qcow2 backing.qcow2 1M
   qemu-img create -f qcow2 -b backing.qcow2 top.qcow2
   qemu-img resize --shrink top.qcow2 520k
   qemu-img resize top.qcow2 567k

In the last step offset - zero_start causes an integer wraparound.

Signed-off-by: Alberto Garcia 
---
 block/qcow2.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/qcow2.c b/block/qcow2.c
index 2ba0b17c39..6d34d28c60 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -4234,6 +4234,9 @@ static int coroutine_fn qcow2_co_truncate(BlockDriverState *bs, int64_t offset,
 if ((flags & BDRV_REQ_ZERO_WRITE) && offset > old_length) {
 uint64_t zero_start = QEMU_ALIGN_UP(old_length, s->cluster_size);
 
+/* zero_start should not be after the new end of the image */
+zero_start = MIN(zero_start, offset);
+
 /*
  * Use zero clusters as much as we can. qcow2_cluster_zeroize()
  * requires a cluster-aligned start. The end may be unaligned if it is
-- 
2.20.1




Re: [PULL 0/4] Block patches

2020-05-01 Thread Peter Maydell
On Fri, 1 May 2020 at 09:28, Stefan Hajnoczi  wrote:
>
> The following changes since commit 27c94566379069fb8930bb1433dcffbf7df3203d:
>
>   Merge remote-tracking branch 
> 'remotes/edgar/tags/edgar/xilinx-next-2020-04-30.for-upstream' into staging 
> (2020-04-30 16:47:23 +0100)
>
> are available in the Git repository at:
>
>   https://github.com/stefanha/qemu.git tags/block-pull-request
>
> for you to fetch changes up to cc1adc4488059ac16d4d2772a7aa7cd1323deeca:
>
>   lockable: Replace locks with lock guard macros (2020-05-01 09:19:25 +0100)
>
> 
> Pull request
>
> Fix the QEMU_LOCK_GUARD() macros, use them more widely, and allow the fuzzer
> target to be selected from argv[0].
>
> 

Hi; this pullreq seems to include a stray change to the slirp
submodule in the "fuzz: select fuzz target using executable name"
commit. Could you fix that and resend, please?

(You might like to include a molly-guard in your pullreq
creation scripts; on my end I catch this sort of thing
when applying with a test like
if git diff master..staging | grep -q 'Subproject commit'; then
# complain and exit unless I used an explicit command
# line option to say I intended to include a submodule update
fi

though I haven't yet put the same test in the script I use
to send pullreqs, for some reason. I guess my workflow now
means I don't tend to accidentally commit submodule changes.)

thanks
-- PMM



Re: [PULL 00/15] Block layer patches

2020-05-01 Thread Peter Maydell
On Thu, 30 Apr 2020 at 16:52, Kevin Wolf  wrote:
>
> The following changes since commit 16aaacb307ed607b9780c12702c44f0fe52edc7e:
>
>   Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20200430' into 
> staging (2020-04-30 14:00:36 +0100)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to eaae29ef89d498d0eac553c77b554f310a47f809:
>
>   qemu-storage-daemon: Fix non-string --object properties (2020-04-30 
> 17:51:07 +0200)
>
> 
> Block layer patches:
>
> - Fix resize (extending) of short overlays
> - nvme: introduce PMR support from NVMe 1.4 spec
> - qemu-storage-daemon: Fix non-string --object properties
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM



Re: [PULL 0/4] Block patches

2020-05-01 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20200501082806.205696-1-stefa...@redhat.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

  TESTiotest-qcow2: 074
socket_accept failed: Resource temporarily unavailable
**
ERROR:/tmp/qemu-test/src/tests/qtest/libqtest.c:301:qtest_init_without_qmp_handshake:
 assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
/tmp/qemu-test/src/tests/qtest/libqtest.c:166: kill_qemu() tried to terminate 
QEMU process but encountered exit status 1 (expected 0)
ERROR - Bail out! 
ERROR:/tmp/qemu-test/src/tests/qtest/libqtest.c:301:qtest_init_without_qmp_handshake:
 assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
make: *** [check-qtest-aarch64] Error 1
make: *** Waiting for unfinished jobs
  TESTiotest-qcow2: 079
  TESTiotest-qcow2: 080
---
Not run: 259
Failures: 249
Failed 1 of 117 iotests
make: *** [check-tests/check-block.sh] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 664, in 
sys.exit(main())
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=57718c949eb54215807b45721021e7ee', '-u', 
'1003', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew2/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-kc_n7mcq/src/docker-src.2020-05-01-04.40.40.25959:/var/tmp/qemu:z,ro',
 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=57718c949eb54215807b45721021e7ee
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-kc_n7mcq/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    21m18.481s
user    0m10.757s


The full log is available at
http://patchew.org/logs/20200501082806.205696-1-stefa...@redhat.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

[PULL 1/4] fuzz: select fuzz target using executable name

2020-05-01 Thread Stefan Hajnoczi
From: Alexander Bulekov 

The fuzzers are built into a binary (e.g. qemu-fuzz-i386). To select the
device to fuzz/fuzz target, we usually use the --fuzz-target= argument.
This commit allows the fuzz-target to be specified using the name of the
executable. If the executable name ends with -target-FUZZ_TARGET, then
we select the fuzz target based on this name, rather than the
--fuzz-target argument. This is useful for systems such as oss-fuzz
where we don't have control of the arguments passed to the fuzzer.

[Fixed incorrect indentation.
--Stefan]

Signed-off-by: Alexander Bulekov 
Reviewed-by: Darren Kenny 
Message-id: 20200421182230.6313-1-alx...@bu.edu
Signed-off-by: Stefan Hajnoczi 
---
 tests/qtest/fuzz/fuzz.c | 19 +++
 slirp   |  2 +-
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c
index 0d78ac8d36..f5c923852e 100644
--- a/tests/qtest/fuzz/fuzz.c
+++ b/tests/qtest/fuzz/fuzz.c
@@ -91,6 +91,7 @@ static void usage(char *path)
 printf(" * %s  : %s\n", tmp->target->name,
 tmp->target->description);
 }
+printf("Alternatively, add -target-FUZZ_TARGET to the executable name\n");
 exit(0);
 }
 
@@ -143,18 +144,20 @@ int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp)
 module_call_init(MODULE_INIT_QOM);
 module_call_init(MODULE_INIT_LIBQOS);
 
-if (*argc <= 1) {
+target_name = strstr(**argv, "-target-");
+if (target_name) {/* The binary name specifies the target */
+target_name += strlen("-target-");
+} else if (*argc > 1) {  /* The target is specified as an argument */
+target_name = (*argv)[1];
+if (!strstr(target_name, "--fuzz-target=")) {
+usage(**argv);
+}
+target_name += strlen("--fuzz-target=");
+} else {
 usage(**argv);
 }
 
 /* Identify the fuzz target */
-target_name = (*argv)[1];
-if (!strstr(target_name, "--fuzz-target=")) {
-usage(**argv);
-}
-
-target_name += strlen("--fuzz-target=");
-
 fuzz_target = fuzz_get_target(target_name);
 if (!fuzz_target) {
 usage(**argv);
diff --git a/slirp b/slirp
index 2faae0f778..55ab21c9a3 160000
--- a/slirp
+++ b/slirp
@@ -1 +1 @@
-Subproject commit 2faae0f778f818fadc873308f983289df697eb93
+Subproject commit 55ab21c9a36852915b81f1b41ebaf3b6509dd8ba
-- 
2.25.3



[PULL 3/4] lockable: replaced locks with lock guard macros where appropriate

2020-05-01 Thread Stefan Hajnoczi
From: Daniel Brodsky 

- ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
- replaced result with QEMU_LOCK_GUARD if all unlocks at function end
- replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end

Signed-off-by: Daniel Brodsky 
Reviewed-by: Juan Quintela 
Message-id: 20200404042108.389635-3-dnbrd...@gmail.com
Signed-off-by: Stefan Hajnoczi 
---
 block/iscsi.c |  7 ++
 block/nfs.c   | 51 ---
 cpus-common.c | 14 +---
 hw/display/qxl.c  | 43 +---
 hw/vfio/platform.c|  5 ++---
 migration/migration.c |  3 +--
 migration/multifd.c   |  8 +++
 migration/ram.c   |  3 +--
 monitor/misc.c|  4 +---
 ui/spice-display.c| 14 ++--
 util/log.c|  4 ++--
 util/qemu-timer.c | 17 +++
 util/rcu.c|  8 +++
 util/thread-pool.c|  3 +--
 util/vfio-helpers.c   |  5 ++---
 15 files changed, 83 insertions(+), 106 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index 0b4b7210df..e4fc71d64b 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -1394,20 +1394,17 @@ static void iscsi_nop_timed_event(void *opaque)
 {
 IscsiLun *iscsilun = opaque;
 
-qemu_mutex_lock(&iscsilun->mutex);
+QEMU_LOCK_GUARD(&iscsilun->mutex);
 if (iscsi_get_nops_in_flight(iscsilun->iscsi) >= MAX_NOP_FAILURES) {
 error_report("iSCSI: NOP timeout. Reconnecting...");
 iscsilun->request_timed_out = true;
 } else if (iscsi_nop_out_async(iscsilun->iscsi, NULL, NULL, 0, NULL) != 0) {
 error_report("iSCSI: failed to sent NOP-Out. Disabling NOP messages.");
-goto out;
+return;
 }
 
 timer_mod(iscsilun->nop_timer, qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + 
NOP_INTERVAL);
 iscsi_set_events(iscsilun);
-
-out:
-qemu_mutex_unlock(>mutex);
 }
 
 static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
diff --git a/block/nfs.c b/block/nfs.c
index cc2413d5ab..cba8e60b28 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -273,15 +273,14 @@ static int coroutine_fn nfs_co_preadv(BlockDriverState *bs, uint64_t offset,
 nfs_co_init_task(bs, &task);
 task.iov = iov;
 
-qemu_mutex_lock(&client->mutex);
-if (nfs_pread_async(client->context, client->fh,
-offset, bytes, nfs_co_generic_cb, &task) != 0) {
-qemu_mutex_unlock(&client->mutex);
-return -ENOMEM;
-}
+WITH_QEMU_LOCK_GUARD(&client->mutex) {
+if (nfs_pread_async(client->context, client->fh,
+offset, bytes, nfs_co_generic_cb, &task) != 0) {
+return -ENOMEM;
+}
 
-nfs_set_events(client);
-qemu_mutex_unlock(&client->mutex);
+nfs_set_events(client);
+}
 while (!task.complete) {
 qemu_coroutine_yield();
 }
@@ -320,19 +319,18 @@ static int coroutine_fn nfs_co_pwritev(BlockDriverState *bs, uint64_t offset,
 buf = iov->iov[0].iov_base;
 }
 
-qemu_mutex_lock(&client->mutex);
-if (nfs_pwrite_async(client->context, client->fh,
- offset, bytes, buf,
- nfs_co_generic_cb, &task) != 0) {
-qemu_mutex_unlock(&client->mutex);
-if (my_buffer) {
-g_free(buf);
+WITH_QEMU_LOCK_GUARD(&client->mutex) {
+if (nfs_pwrite_async(client->context, client->fh,
+ offset, bytes, buf,
+ nfs_co_generic_cb, &task) != 0) {
+if (my_buffer) {
+g_free(buf);
+}
+return -ENOMEM;
 }
-return -ENOMEM;
-}
 
-nfs_set_events(client);
-qemu_mutex_unlock(&client->mutex);
+nfs_set_events(client);
+}
 while (!task.complete) {
 qemu_coroutine_yield();
 }
@@ -355,15 +353,14 @@ static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
 
 nfs_co_init_task(bs, &task);
 
-qemu_mutex_lock(&client->mutex);
-if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
-&task) != 0) {
-qemu_mutex_unlock(&client->mutex);
-return -ENOMEM;
-}
+WITH_QEMU_LOCK_GUARD(&client->mutex) {
+if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
+&task) != 0) {
+return -ENOMEM;
+}
 
-nfs_set_events(client);
-qemu_mutex_unlock(&client->mutex);
+nfs_set_events(client);
+}
 while (!task.complete) {
 qemu_coroutine_yield();
 }
diff --git a/cpus-common.c b/cpus-common.c
index eaf590cb38..55d5df8923 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -22,6 +22,7 @@
 #include "exec/cpu-common.h"
 #include "hw/core/cpu.h"
 #include "sysemu/cpus.h"
+#include "qemu/lockable.h"
 
 static QemuMutex qemu_cpu_list_lock;
 static QemuCond exclusive_cond;
@@ -71,7 +72,7 @@ static int cpu_get_free_index(void)
 
 void cpu_list_add(CPUState *cpu)
 {
-qemu_mutex_lock(&qemu_cpu_list_lock);
+QEMU_LOCK_GUARD(&qemu_cpu_list_lock);
 if (cpu->cpu_index == UNASSIGNED_CPU_INDEX) {
 

[PULL 2/4] lockable: fix __COUNTER__ macro to be referenced properly

2020-05-01 Thread Stefan Hajnoczi
From: Daniel Brodsky 

- __COUNTER__ doesn't work with ## concat
- replaced ## with glue() macro so __COUNTER__ is evaluated

Fixes: 3284c3ddc4

Signed-off-by: Daniel Brodsky 
Message-id: 20200404042108.389635-2-dnbrd...@gmail.com
Signed-off-by: Stefan Hajnoczi 
---
 include/qemu/lockable.h | 7 ---
 include/qemu/rcu.h  | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/qemu/lockable.h b/include/qemu/lockable.h
index 1aeb2cb1a6..b620023141 100644
--- a/include/qemu/lockable.h
+++ b/include/qemu/lockable.h
@@ -152,7 +152,7 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(QemuLockable, qemu_lockable_auto_unlock)
  *   }
  */
 #define WITH_QEMU_LOCK_GUARD(x) \
-WITH_QEMU_LOCK_GUARD_((x), qemu_lockable_auto##__COUNTER__)
+WITH_QEMU_LOCK_GUARD_((x), glue(qemu_lockable_auto, __COUNTER__))
 
 /**
  * QEMU_LOCK_GUARD - Lock an object until the end of the scope
@@ -169,8 +169,9 @@ G_DEFINE_AUTOPTR_CLEANUP_FUNC(QemuLockable, qemu_lockable_auto_unlock)
  *   return; <-- mutex is automatically unlocked
  *   }
  */
-#define QEMU_LOCK_GUARD(x) \
-g_autoptr(QemuLockable) qemu_lockable_auto##__COUNTER__ = \
+#define QEMU_LOCK_GUARD(x)   \
+g_autoptr(QemuLockable)  \
+glue(qemu_lockable_auto, __COUNTER__) G_GNUC_UNUSED =\
 qemu_lockable_auto_lock(QEMU_MAKE_LOCKABLE((x)))
 
 #endif
diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 9c82683e37..570aa603eb 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -170,7 +170,7 @@ static inline void rcu_read_auto_unlock(RCUReadAuto *r)
 G_DEFINE_AUTOPTR_CLEANUP_FUNC(RCUReadAuto, rcu_read_auto_unlock)
 
 #define WITH_RCU_READ_LOCK_GUARD() \
-WITH_RCU_READ_LOCK_GUARD_(_rcu_read_auto##__COUNTER__)
+WITH_RCU_READ_LOCK_GUARD_(glue(_rcu_read_auto, __COUNTER__))
 
 #define WITH_RCU_READ_LOCK_GUARD_(var) \
 for (g_autoptr(RCUReadAuto) var = rcu_read_auto_lock(); \
-- 
2.25.3



[PULL 4/4] lockable: Replace locks with lock guard macros

2020-05-01 Thread Stefan Hajnoczi
From: Simran Singhal 

Replace manual lock()/unlock() calls with lock guard macros
(QEMU_LOCK_GUARD/WITH_QEMU_LOCK_GUARD).

Signed-off-by: Simran Singhal 
Reviewed-by: Yuval Shaia 
Reviewed-by: Marcel Apfelbaum
Tested-by: Yuval Shaia 
Message-id: 20200402065035.GA15477@simran-Inspiron-5558
Signed-off-by: Stefan Hajnoczi 
---
 hw/hyperv/hyperv.c | 15 ++---
 hw/rdma/rdma_backend.c | 50 +-
 hw/rdma/rdma_rm.c  |  3 +--
 3 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
index 8ca3706f5b..4ddafe1de1 100644
--- a/hw/hyperv/hyperv.c
+++ b/hw/hyperv/hyperv.c
@@ -15,6 +15,7 @@
 #include "sysemu/kvm.h"
 #include "qemu/bitops.h"
 #include "qemu/error-report.h"
+#include "qemu/lockable.h"
 #include "qemu/queue.h"
 #include "qemu/rcu.h"
 #include "qemu/rcu_queue.h"
@@ -491,7 +492,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
 int ret;
 MsgHandler *mh;
 
-qemu_mutex_lock(&handlers_mutex);
+QEMU_LOCK_GUARD(&handlers_mutex);
 QLIST_FOREACH(mh, _handlers, link) {
 if (mh->conn_id == conn_id) {
 if (handler) {
@@ -501,7 +502,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
 g_free_rcu(mh, rcu);
 ret = 0;
 }
-goto unlock;
+return ret;
 }
 }
 
@@ -515,8 +516,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
 } else {
 ret = -ENOENT;
 }
-unlock:
-qemu_mutex_unlock(&handlers_mutex);
+
 return ret;
 }
 
@@ -565,7 +565,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
 int ret;
 EventFlagHandler *handler;
 
-qemu_mutex_lock(&handlers_mutex);
+QEMU_LOCK_GUARD(&handlers_mutex);
 QLIST_FOREACH(handler, _flag_handlers, link) {
 if (handler->conn_id == conn_id) {
 if (notifier) {
@@ -575,7 +575,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
 g_free_rcu(handler, rcu);
 ret = 0;
 }
-goto unlock;
+return ret;
 }
 }
 
@@ -588,8 +588,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
 } else {
 ret = -ENOENT;
 }
-unlock:
-qemu_mutex_unlock(&handlers_mutex);
+
 return ret;
 }
 
diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c
index 3dd39fe1a7..db7e5c8be5 100644
--- a/hw/rdma/rdma_backend.c
+++ b/hw/rdma/rdma_backend.c
@@ -95,36 +95,36 @@ static int rdma_poll_cq(RdmaDeviceResources *rdma_dev_res, struct ibv_cq *ibcq)
 struct ibv_wc wc[2];
 RdmaProtectedGSList *cqe_ctx_list;
 
-qemu_mutex_lock(&rdma_dev_res->lock);
-do {
-ne = ibv_poll_cq(ibcq, ARRAY_SIZE(wc), wc);
+WITH_QEMU_LOCK_GUARD(&rdma_dev_res->lock) {
+do {
+ne = ibv_poll_cq(ibcq, ARRAY_SIZE(wc), wc);
 
-trace_rdma_poll_cq(ne, ibcq);
+trace_rdma_poll_cq(ne, ibcq);
 
-for (i = 0; i < ne; i++) {
-bctx = rdma_rm_get_cqe_ctx(rdma_dev_res, wc[i].wr_id);
-if (unlikely(!bctx)) {
-rdma_error_report("No matching ctx for req %"PRId64,
-  wc[i].wr_id);
-continue;
-}
+for (i = 0; i < ne; i++) {
+bctx = rdma_rm_get_cqe_ctx(rdma_dev_res, wc[i].wr_id);
+if (unlikely(!bctx)) {
+rdma_error_report("No matching ctx for req %"PRId64,
+  wc[i].wr_id);
+continue;
+}
 
-comp_handler(bctx->up_ctx, &wc[i]);
+comp_handler(bctx->up_ctx, &wc[i]);
 
-if (bctx->backend_qp) {
-cqe_ctx_list = &bctx->backend_qp->cqe_ctx_list;
-} else {
-cqe_ctx_list = &bctx->backend_srq->cqe_ctx_list;
-}
+if (bctx->backend_qp) {
+cqe_ctx_list = &bctx->backend_qp->cqe_ctx_list;
+} else {
+cqe_ctx_list = &bctx->backend_srq->cqe_ctx_list;
+}
 
-rdma_protected_gslist_remove_int32(cqe_ctx_list, wc[i].wr_id);
-rdma_rm_dealloc_cqe_ctx(rdma_dev_res, wc[i].wr_id);
-g_free(bctx);
-}
-total_ne += ne;
-} while (ne > 0);
-atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
-qemu_mutex_unlock(&rdma_dev_res->lock);
+rdma_protected_gslist_remove_int32(cqe_ctx_list, wc[i].wr_id);
+rdma_rm_dealloc_cqe_ctx(rdma_dev_res, wc[i].wr_id);
+g_free(bctx);
+}
+total_ne += ne;
+} while (ne > 0);
+atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
+}
 
 if (ne < 0) {
 rdma_error_report("ibv_poll_cq fail, rc=%d, errno=%d", ne, errno);
diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c
index 

[PULL 0/4] Block patches

2020-05-01 Thread Stefan Hajnoczi
The following changes since commit 27c94566379069fb8930bb1433dcffbf7df3203d:

  Merge remote-tracking branch 
'remotes/edgar/tags/edgar/xilinx-next-2020-04-30.for-upstream' into staging 
(2020-04-30 16:47:23 +0100)

are available in the Git repository at:

  https://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to cc1adc4488059ac16d4d2772a7aa7cd1323deeca:

  lockable: Replace locks with lock guard macros (2020-05-01 09:19:25 +0100)


Pull request

Fix the QEMU_LOCK_GUARD() macros, use them more widely, and allow the fuzzer
target to be selected from argv[0].



Alexander Bulekov (1):
  fuzz: select fuzz target using executable name

Daniel Brodsky (2):
  lockable: fix __COUNTER__ macro to be referenced properly
  lockable: replaced locks with lock guard macros where appropriate

Simran Singhal (1):
  lockable: Replace locks with lock guard macros

 include/qemu/lockable.h |  7 +++---
 include/qemu/rcu.h  |  2 +-
 block/iscsi.c   |  7 ++
 block/nfs.c | 51 +++--
 cpus-common.c   | 14 ---
 hw/display/qxl.c| 43 --
 hw/hyperv/hyperv.c  | 15 ++--
 hw/rdma/rdma_backend.c  | 50 
 hw/rdma/rdma_rm.c   |  3 +--
 hw/vfio/platform.c  |  5 ++--
 migration/migration.c   |  3 +--
 migration/multifd.c |  8 +++
 migration/ram.c |  3 +--
 monitor/misc.c  |  4 +---
 tests/qtest/fuzz/fuzz.c | 19 ---
 ui/spice-display.c  | 14 +--
 util/log.c  |  4 ++--
 util/qemu-timer.c   | 17 +++---
 util/rcu.c  |  8 +++
 util/thread-pool.c  |  3 +--
 util/vfio-helpers.c |  5 ++--
 slirp   |  2 +-
 22 files changed, 133 insertions(+), 154 deletions(-)

-- 
2.25.3



[PULL for-5.0 1/4] fuzz: select fuzz target using executable name

2020-05-01 Thread Stefan Hajnoczi
From: Alexander Bulekov 

The fuzzers are built into a binary (e.g. qemu-fuzz-i386). To select the
fuzz target (the device to fuzz), we usually use the --fuzz-target=
argument. This commit allows the fuzz target to be specified using the name of the
executable. If the executable name ends with -target-FUZZ_TARGET, then
we select the fuzz target based on this name, rather than the
--fuzz-target argument. This is useful for systems such as oss-fuzz
where we don't have control of the arguments passed to the fuzzer.

[Fixed incorrect indentation.
--Stefan]

Signed-off-by: Alexander Bulekov 
Reviewed-by: Darren Kenny 
Message-id: 20200421182230.6313-1-alx...@bu.edu
Signed-off-by: Stefan Hajnoczi 
---
 tests/qtest/fuzz/fuzz.c | 19 +++
 slirp   |  2 +-
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/tests/qtest/fuzz/fuzz.c b/tests/qtest/fuzz/fuzz.c
index 0d78ac8d36..f5c923852e 100644
--- a/tests/qtest/fuzz/fuzz.c
+++ b/tests/qtest/fuzz/fuzz.c
@@ -91,6 +91,7 @@ static void usage(char *path)
 printf(" * %s  : %s\n", tmp->target->name,
 tmp->target->description);
 }
+printf("Alternatively, add -target-FUZZ_TARGET to the executable name\n");
 exit(0);
 }
 
@@ -143,18 +144,20 @@ int LLVMFuzzerInitialize(int *argc, char ***argv, char ***envp)
 module_call_init(MODULE_INIT_QOM);
 module_call_init(MODULE_INIT_LIBQOS);
 
-if (*argc <= 1) {
+target_name = strstr(**argv, "-target-");
+if (target_name) {/* The binary name specifies the target */
+target_name += strlen("-target-");
+} else if (*argc > 1) {  /* The target is specified as an argument */
+target_name = (*argv)[1];
+if (!strstr(target_name, "--fuzz-target=")) {
+usage(**argv);
+}
+target_name += strlen("--fuzz-target=");
+} else {
 usage(**argv);
 }
 
 /* Identify the fuzz target */
-target_name = (*argv)[1];
-if (!strstr(target_name, "--fuzz-target=")) {
-usage(**argv);
-}
-
-target_name += strlen("--fuzz-target=");
-
 fuzz_target = fuzz_get_target(target_name);
 if (!fuzz_target) {
 usage(**argv);
diff --git a/slirp b/slirp
index 2faae0f778..55ab21c9a3 160000
--- a/slirp
+++ b/slirp
@@ -1 +1 @@
-Subproject commit 2faae0f778f818fadc873308f983289df697eb93
+Subproject commit 55ab21c9a36852915b81f1b41ebaf3b6509dd8ba
-- 
2.25.3



[PULL for-5.0 0/4] Block patches

2020-05-01 Thread Stefan Hajnoczi
The following changes since commit 27c94566379069fb8930bb1433dcffbf7df3203d:

  Merge remote-tracking branch 
'remotes/edgar/tags/edgar/xilinx-next-2020-04-30.for-upstream' into staging 
(2020-04-30 16:47:23 +0100)

are available in the Git repository at:

  https://github.com/stefanha/qemu.git tags/block-pull-request

for you to fetch changes up to cc1adc4488059ac16d4d2772a7aa7cd1323deeca:

  lockable: Replace locks with lock guard macros (2020-05-01 09:19:25 +0100)


Pull request

Fix the QEMU_LOCK_GUARD() macros, use them more widely, and allow the
fuzzer target to be selected via argv[0].



Alexander Bulekov (1):
  fuzz: select fuzz target using executable name

Daniel Brodsky (2):
  lockable: fix __COUNTER__ macro to be referenced properly
  lockable: replaced locks with lock guard macros where appropriate

Simran Singhal (1):
  lockable: Replace locks with lock guard macros

 include/qemu/lockable.h |  7 +++---
 include/qemu/rcu.h  |  2 +-
 block/iscsi.c   |  7 ++
 block/nfs.c | 51 +++--
 cpus-common.c   | 14 ---
 hw/display/qxl.c| 43 --
 hw/hyperv/hyperv.c  | 15 ++--
 hw/rdma/rdma_backend.c  | 50 
 hw/rdma/rdma_rm.c   |  3 +--
 hw/vfio/platform.c  |  5 ++--
 migration/migration.c   |  3 +--
 migration/multifd.c |  8 +++
 migration/ram.c |  3 +--
 monitor/misc.c  |  4 +---
 tests/qtest/fuzz/fuzz.c | 19 ---
 ui/spice-display.c  | 14 +--
 util/log.c  |  4 ++--
 util/qemu-timer.c   | 17 +++---
 util/rcu.c  |  8 +++
 util/thread-pool.c  |  3 +--
 util/vfio-helpers.c |  5 ++--
 slirp   |  2 +-
 22 files changed, 133 insertions(+), 154 deletions(-)

-- 
2.25.3