Re: [ovirt-users] very very bad iscsi performance

2020-07-20 Thread Philip Brown
Ah! My apologies. It seemed very odd, so I reviewed and discovered that I had messed up my testing of direct LUN. The updated results are improved over my previous email, but no better than going through a normal storage domain. 18156: 61.714: IO Summary: 110396 ops, 1836.964 ops/s, (921/907 r/

Re: [ovirt-users] very very bad iscsi performance

2020-07-20 Thread Philip Brown
FYI, I just tried it with direct LUN. It is as bad or worse. I don't know about SG_IO vs. the QEMU initiator, but here are the results. 15223: 62.824: IO Summary: 83751 ops, 1387.166 ops/s, (699/681 r/w), 2.7mb/s, 619us cpu/op, 281.4ms latency 15761: 62.268: IO Summary: 77610 ops, 1287.908 o

Re: [ovirt-users] very very bad iscsi performance

2020-07-20 Thread Philip Brown
Yes, I am testing small writes. "OLTP workload" means a simulation of OLTP database access. You asked me to test the speed of iSCSI from another host, which is very reasonable. So here are the results, run from another node in the oVirt cluster. The setup uses: - the exact same VG device, exported v

Re: [RFC PATCH-for-5.1 v2] hw/ide: Avoid #DIV/0! FPU exception by setting CD-ROM sector count

2020-07-20 Thread John Snow
On 7/17/20 9:38 AM, Philippe Mathieu-Daudé wrote: libFuzzer found an undefined behavior (#DIV/0!) in ide_set_sector() when using a CD-ROM (reproducer available on the BugLink): UndefinedBehaviorSanitizer:DEADLYSIGNAL ==12163==ERROR: UndefinedBehaviorSanitizer: FPE on unknown address 0x561
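The hazard is easy to see in a generic LBA-to-CHS conversion. The sketch below is illustrative only (lba_to_chs is a made-up helper, not ide_set_sector()); it shows why a zero sector count from an uninitialized CD-ROM geometry turns the division into a SIGFPE, which guarding the geometry, or initializing the sector count as the patch does, avoids.

#include <stdint.h>

/* Generic LBA-to-CHS sketch, not the actual ide_set_sector(): if the drive
 * geometry was never initialized (heads == 0 or sectors == 0, as can happen
 * for a CD-ROM), the divisions below trap with an integer division by zero,
 * reported by UBSan as an FPE. */
static int lba_to_chs(uint64_t lba, uint32_t heads, uint32_t sectors,
                      uint64_t *cyl, uint32_t *head, uint32_t *sect)
{
    if (heads == 0 || sectors == 0) {
        return -1;                      /* undefined geometry, refuse */
    }
    *cyl  = lba / ((uint64_t)heads * sectors);
    *head = (uint32_t)((lba / sectors) % heads);
    *sect = (uint32_t)(lba % sectors) + 1;
    return 0;
}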

Re: [ovirt-users] very very bad iscsi performance

2020-07-20 Thread Paolo Bonzini
On Mon, 20 Jul 2020 at 23:42, Nir Soffer wrote: > I think you will get the best performance using direct LUN. Is direct LUN using the QEMU iSCSI initiator, or SG_IO, and if so is it using /dev/sg or has that been fixed? SG_IO is definitely not going to be the fastest, especially with /dev/sg.

Re: [ovirt-users] very very bad iscsi performance

2020-07-20 Thread Nir Soffer
On Mon, Jul 20, 2020 at 8:51 PM Philip Brown wrote: > > I'm trying to get optimal iSCSI performance. We're a heavy iSCSI shop, with > 10g net. > > I'm experimenting with SSDs, and the performance in oVirt is way, way less > than I would have hoped. > More than an order of magnitude slower. > >

Re: [PATCH 3/4] io/channel-socket: implement non-blocking connect

2020-07-20 Thread Daniel P . Berrangé
On Mon, Jul 20, 2020 at 09:07:14PM +0300, Vladimir Sementsov-Ogievskiy wrote: > Utilize new socket API to make a non-blocking connect for inet sockets. > > Signed-off-by: Vladimir Sementsov-Ogievskiy > --- > include/io/channel-socket.h | 14 +++ > io/channel-socket.c | 74 +++

[PATCH 3/4] io/channel-socket: implement non-blocking connect

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
Utilize new socket API to make a non-blocking connect for inet sockets. Signed-off-by: Vladimir Sementsov-Ogievskiy --- include/io/channel-socket.h | 14 +++ io/channel-socket.c | 74 + 2 files changed, 88 insertions(+) diff --git a/include/io/cha

[PATCH 2/4] qemu-sockets: implement non-blocking connect interface

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
We are going to implement non-blocking connect in io/channel-socket. Non-blocking connect consists of three phases: 1. the connect() call, 2. waiting until the socket is ready, 3. checking the result. io/channel-socket has a wait-on-socket API (qio_channel_yield(), qio_channel_wait()), so it's a good place fo
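For context, the three phases map onto the standard POSIX pattern sketched below. This is a generic illustration, not the io/channel-socket code; inside QEMU, qio_channel_yield()/qio_channel_wait() would take the place of the poll() step. Splitting phase 2 out is what lets the caller yield on the fd instead of blocking a thread inside connect().

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

/* Phase 1: start the connect without blocking. */
static int connect_start(int fd, const struct sockaddr *addr, socklen_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    if (connect(fd, addr, len) == 0) {
        return 0;                        /* connected immediately */
    }
    return errno == EINPROGRESS ? 1 : -errno;
}

/* Phase 2: wait until the socket becomes writable (i.e. ready). */
static int connect_wait(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    return poll(&pfd, 1, timeout_ms);    /* >0 ready, 0 timeout, <0 error */
}

/* Phase 3: check the result of the connect via SO_ERROR. */
static int connect_check(int fd)
{
    int err = 0;
    socklen_t errlen = sizeof(err);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen) < 0) {
        return -errno;
    }
    return err ? -err : 0;               /* 0 on success, -errno on failure */
}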

[PATCH 4/4] block/nbd: use non-blocking connect: fix vm hang on connect()

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
This makes the nbd connection_co yield during reconnects, so that a reconnect doesn't hang the main thread. This is very important in the case of an unavailable NBD server host: the connect() call may take a long time, blocking the main thread (and due to reconnect, it will hang again and again with small gaps

[PATCH for-5.1? 0/4] non-blocking connect

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
Hi! This fixes a real problem (see 04). On the other hand, it may be too much for 5.1, and it's not a regression. So, up to you. It's based on "[PATCH for-5.1? 0/3] Fix nbd reconnect dead-locks", or in other words Based-on: <20200720090024.18186-1-vsement...@virtuozzo.com> Vladimir Sementsov-Ogievs

[PATCH 1/4] qemu-sockets: refactor inet_connect_addr

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
We are going to publish inet_connect_addr so it can be used separately. Let's move keep_alive handling into it. Pass the whole InetSocketAddress pointer, not only keep_alive, so that future external callers will not have to care about the internals of InetSocketAddress. While we are here, remove the redundant inet_connect

Re: [PATCH v2 3/3] iotests: Test node/bitmap aliases during migration

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
16.07.2020 16:53, Max Reitz wrote: Signed-off-by: Max Reitz --- tests/qemu-iotests/300 | 511 + tests/qemu-iotests/300.out | 5 + tests/qemu-iotests/group | 1 + 3 files changed, 517 insertions(+) create mode 100755 tests/qemu-iotests/300 cr

Re: [PATCH v2 2/3] iotests.py: Add wait_for_runstate()

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
16.07.2020 16:53, Max Reitz wrote: Signed-off-by: Max Reitz --- tests/qemu-iotests/iotests.py | 4 1 file changed, 4 insertions(+) diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py index 3590ed78a0..fb240a334c 100644 --- a/tests/qemu-iotests/iotests.py +++ b/tes

Re: [PATCH v2 1/3] migration: Add block-bitmap-mapping parameter

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
16.07.2020 16:53, Max Reitz wrote: This migration parameter allows mapping block node names and bitmap names to aliases for the purpose of block dirty bitmap migration. This way, management tools can use different node and bitmap names on the source and destination and pass the mapping of how bi

Re: [PATCH v7 29/47] blockdev: Use CAF in external_snapshot_prepare()

2020-07-20 Thread Andrey Shinkevich
On 25.06.2020 18:21, Max Reitz wrote: This allows us to differentiate between filters and nodes with COW backing files: Filters cannot be used as overlays at all (for this function). Signed-off-by: Max Reitz --- blockdev.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --g

Re: [PATCH RESEND] file-posix: Handle `EINVAL` fallocate return value

2020-07-20 Thread Antoine Damhet
On Mon, Jul 20, 2020 at 04:07:26PM +0200, Kevin Wolf wrote: > On 17.07.2020 at 15:56, antoine.dam...@blade-group.com wrote: > > From: Antoine Damhet > > > > The `detect-zeroes=unmap` option may issue unaligned > > `FALLOC_FL_PUNCH_HOLE` requests, raw block devices can (and will) return >

Re: [PATCH v7 26/47] block: Improve get_allocated_file_size's default

2020-07-20 Thread Andrey Shinkevich
On 25.06.2020 18:21, Max Reitz wrote: There are two practical problems with bdrv_get_allocated_file_size()'s default right now: (1) For drivers with children, we should generally sum all their sizes instead of just passing the request through to bs->file. The latter is good for filters

Re: [PATCH v7 28/47] block/null: Implement bdrv_get_allocated_file_size

2020-07-20 Thread Andrey Shinkevich
On 25.06.2020 18:21, Max Reitz wrote: It is trivial, so we might as well do it. Signed-off-by: Max Reitz --- block/null.c | 7 +++ tests/qemu-iotests/153.out | 2 +- tests/qemu-iotests/184.out | 6 -- 3 files changed, 12 insertions(+), 3 deletions(-) diff --git a/blo

Re: [PATCH v7 27/47] blkverify: Use bdrv_sum_allocated_file_size()

2020-07-20 Thread Andrey Shinkevich
On 25.06.2020 18:21, Max Reitz wrote: blkverify is a filter, so bdrv_get_allocated_file_size()'s default implementation will return only the size of its filtered child. However, because both of its children are disk images, it makes more sense to sum both of their allocated sizes. Signed-off-by:

Re: [PATCH for-5.1 1/2] qcow2: Implement v2 zero writes with discard if possible

2020-07-20 Thread Nir Soffer
On Mon, Jul 20, 2020 at 4:18 PM Kevin Wolf wrote: > > qcow2 version 2 images don't support the zero flag for clusters, so for > write_zeroes requests, we return -ENOTSUP and get explicit zero buffer > writes. If the image doesn't have a backing file, we can do better: Just > discard the respective

Re: [PATCH for-5.1 2/2] iotests: Test sparseness for qemu-img convert -n

2020-07-20 Thread Nir Soffer
On Mon, Jul 20, 2020 at 4:18 PM Kevin Wolf wrote: > > Signed-off-by: Kevin Wolf > --- > tests/qemu-iotests/122 | 34 ++ > tests/qemu-iotests/122.out | 17 + > 2 files changed, 51 insertions(+) > > diff --git a/tests/qemu-iotests/122 b/tests/qem

Re: various iotests failures apparently due to overly optimistic timeout settings

2020-07-20 Thread John Snow
On 7/20/20 10:15 AM, Peter Maydell wrote: On Mon, 20 Jul 2020 at 15:12, John Snow wrote: On 7/20/20 6:46 AM, Kevin Wolf wrote: John, I think this is a result of your recent python/qemu/ changes that make failure of graceful shutdown an error rather than just silently falling back to SIGKILL.
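The pattern under discussion (try a graceful shutdown first, escalate to SIGKILL only after a timeout) looks roughly like the generic POSIX sketch below; shutdown_child is a made-up helper, not the python/qemu code. The thread's question is essentially where timeout_sec should sit when the test host is heavily loaded.

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>

/* Hypothetical helper: ask the child to exit cleanly, and only after
 * timeout_sec seconds of polling fall back to SIGKILL. */
static int shutdown_child(pid_t pid, int timeout_sec)
{
    int status;

    kill(pid, SIGTERM);                          /* request graceful exit */

    for (int waited_ms = 0; waited_ms < timeout_sec * 1000; waited_ms += 100) {
        if (waitpid(pid, &status, WNOHANG) == pid) {
            return status;                       /* graceful shutdown worked */
        }
        struct timespec ts = { 0, 100 * 1000 * 1000 };
        nanosleep(&ts, NULL);                    /* poll every 100 ms */
    }

    kill(pid, SIGKILL);                          /* hard kill as last resort */
    waitpid(pid, &status, 0);
    return status;
}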

Re: various iotests failures apparently due to overly optimistic timeout settings

2020-07-20 Thread Peter Maydell
On Mon, 20 Jul 2020 at 15:18, John Snow wrote: > > On 7/20/20 10:15 AM, Peter Maydell wrote: > > On Mon, 20 Jul 2020 at 15:12, John Snow wrote: > >> > >> On 7/20/20 6:46 AM, Kevin Wolf wrote: > >>> John, I think this is a result of your recent python/qemu/ changes that > >>> make failure of grace

Re: various iotests failures apparently due to overly optimistic timeout settings

2020-07-20 Thread Peter Maydell
On Mon, 20 Jul 2020 at 15:12, John Snow wrote: > > On 7/20/20 6:46 AM, Kevin Wolf wrote: > > John, I think this is a result of your recent python/qemu/ changes that > > make failure of graceful shutdown an error rather than just silently > > falling back to SIGKILL. > > > > Should the default time

Re: various iotests failures apparently due to overly optimistic timeout settings

2020-07-20 Thread John Snow
On 7/20/20 6:46 AM, Kevin Wolf wrote: On 19.07.2020 at 14:07, Peter Maydell wrote: I just had a bunch of iotests fail on a FreeBSD VM test run. I think the machine the VM runs on is sometimes a bit heavily loaded for I/O, which means the VM can run slowly. This causes various over-optim

Re: [PATCH RESEND] file-posix: Handle `EINVAL` fallocate return value

2020-07-20 Thread Kevin Wolf
On 17.07.2020 at 15:56, antoine.dam...@blade-group.com wrote: > From: Antoine Damhet > > The `detect-zeroes=unmap` option may issue unaligned > `FALLOC_FL_PUNCH_HOLE` requests, raw block devices can (and will) return > `EINVAL`, qemu should then write the zeroes to the blockdev instead o
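The fallback Antoine describes follows the usual punch-hole-then-write-zeroes pattern. The sketch below is a generic Linux illustration (zero_range is a made-up helper, not the file-posix code); checking only for EINVAL/EOPNOTSUPP keeps genuine I/O errors visible to the caller.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Try to deallocate (punch a hole); if the raw block device rejects the
 * possibly unaligned request with EINVAL, write explicit zeroes instead. */
static int zero_range(int fd, off_t offset, off_t len)
{
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  offset, len) == 0) {
        return 0;
    }
    if (errno != EINVAL && errno != EOPNOTSUPP) {
        return -errno;                       /* a real error, report it */
    }

    /* Fallback: write a buffer of zeroes covering the range. */
    size_t bufsize = 1 << 20;                /* 1 MiB chunks */
    char *buf = calloc(1, bufsize);
    if (!buf) {
        return -ENOMEM;
    }
    for (off_t done = 0; done < len; ) {
        size_t chunk = (size_t)(len - done) < bufsize ? (size_t)(len - done)
                                                      : bufsize;
        ssize_t n = pwrite(fd, buf, chunk, offset + done);
        if (n < 0) {
            free(buf);
            return -errno;
        }
        done += n;
    }
    free(buf);
    return 0;
}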

[PATCH for-5.1 0/2] qemu-img convert -n: Keep qcow2 v2 target sparse

2020-07-20 Thread Kevin Wolf
Kevin Wolf (2): qcow2: Implement v2 zero writes with discard if possible iotests: Test sparseness for qemu-img convert -n block/qcow2-cluster.c | 9 - tests/qemu-iotests/122 | 34 ++ tests/qemu-iotests/122.out | 17 + 3 files c

[PATCH for-5.1 1/2] qcow2: Implement v2 zero writes with discard if possible

2020-07-20 Thread Kevin Wolf
qcow2 version 2 images don't support the zero flag for clusters, so for write_zeroes requests, we return -ENOTSUP and get explicit zero buffer writes. If the image doesn't have a backing file, we can do better: Just discard the respective clusters. This is relevant for 'qemu-img convert -O qcow2 -
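As a rough illustration of the decision described in the commit message, here is a self-contained toy model; the Image struct and both helpers are invented stand-ins, not the real qcow2 code.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented stand-ins, not the real qcow2 structures or functions. */
typedef struct Image {
    int version;                 /* qcow2 version: 2 or 3 */
    bool has_backing_file;
} Image;

static int set_zero_flag(Image *img, uint64_t off, uint64_t len)    { return 0; }
static int discard_clusters(Image *img, uint64_t off, uint64_t len) { return 0; }

/* Decide how to honour a write_zeroes request. */
static int handle_write_zeroes(Image *img, uint64_t off, uint64_t len)
{
    if (img->version >= 3) {
        return set_zero_flag(img, off, len);      /* v3 has a zero flag */
    }
    if (!img->has_backing_file) {
        /* v2 without a backing file: discarded clusters read back as
         * zeroes, so a discard is correct and keeps the image sparse. */
        return discard_clusters(img, off, len);
    }
    /* v2 with a backing file: discarding would expose backing-file data,
     * so return -ENOTSUP and let the caller write explicit zero buffers. */
    return -ENOTSUP;
}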

[PATCH for-5.1 2/2] iotests: Test sparseness for qemu-img convert -n

2020-07-20 Thread Kevin Wolf
Signed-off-by: Kevin Wolf --- tests/qemu-iotests/122 | 34 ++ tests/qemu-iotests/122.out | 17 + 2 files changed, 51 insertions(+) diff --git a/tests/qemu-iotests/122 b/tests/qemu-iotests/122 index dfd1cd05d6..1112fc0730 100755 --- a/tests/qemu

Re: [PATCH 0/2] Fix for write sharing on luks raw images

2020-07-20 Thread Max Reitz
On 19.07.20 14:20, Maxim Levitsky wrote: > A rebase gone wrong, and I ended up allowing a LUKS image > to be opened at the same time by two VMs without any warnings/overrides. > > Fix that and also add an iotest to prevent this from happening. > > Best regards, > Maxim Levitsky > > Maxim Le

[PATCH 12/16] hw/block/nvme: refactor NvmeRequest clearing

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Move clearing of the structure from "clear before use" to "clear after use". Signed-off-by: Klaus Jensen --- hw/block/nvme.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index e2932239c661..431f26c2f589 100644 --

[PATCH 15/16] hw/block/nvme: remove NvmeCmd parameter

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Keep a copy of the raw nvme command in the NvmeRequest and remove the now redundant NvmeCmd parameter. Signed-off-by: Klaus Jensen --- hw/block/nvme.c | 177 +--- hw/block/nvme.h | 1 + 2 files changed, 93 insertions(+), 85 delet

[PATCH 11/16] hw/block/nvme: be consistent about zeros vs zeroes

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen The NVM Express specification generally uses 'zeroes' and not 'zeros', so let us align with it. Cc: Fam Zheng Signed-off-by: Klaus Jensen --- block/nvme.c | 4 ++-- hw/block/nvme.c | 8 include/block/nvme.h | 4 ++-- 3 files changed, 8 insertions(+), 8

[PATCH 13/16] hw/block/nvme: add a namespace reference in NvmeRequest

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Instead of passing around the NvmeNamespace, add it as a member in the NvmeRequest structure. Signed-off-by: Klaus Jensen --- hw/block/nvme.c | 21 ++--- hw/block/nvme.h | 1 + 2 files changed, 11 insertions(+), 11 deletions(-) diff --git a/hw/block/nvme.c

[PATCH 16/16] hw/block/nvme: use preallocated qsg/iov in nvme_dma_prp

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Since clean up of the request qsg/iov is now always done post-use, there is no need to use a stack-allocated qsg/iov in nvme_dma_prp. Signed-off-by: Klaus Jensen Acked-by: Keith Busch Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 18 ++ 1 file changed,

[PATCH 06/16] hw/block/nvme: pass request along for tracing

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Pass along the NvmeRequest in various functions since it is very useful for tracing. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 67 +-- hw/block/trace-events | 1 + 2 files changed, 40 inserti

[PATCH 05/16] hw/block/nvme: refactor dma read/write

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Refactor the nvme_dma_{read,write}_prp functions into a common function taking a DMADirection parameter. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 88 - 1 file changed, 43 insertions(+), 45

[PATCH 10/16] hw/block/nvme: add check for mdts

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Add 'mdts' device parameter to control the Maximum Data Transfer Size of the controller and check that it is respected. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 32 ++-- hw/block/nvme.h | 1 + hw/

[PATCH 14/16] hw/block/nvme: consolidate qsg/iov clearing

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Always destroy the request qsg/iov at the end of request use. Signed-off-by: Klaus Jensen --- hw/block/nvme.c | 48 +--- 1 file changed, 17 insertions(+), 31 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 54cd20

[PATCH 09/16] hw/block/nvme: refactor request bounds checking

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Hoist bounds checking into its own function and check for wrap-around. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 26 +- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme
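Checking for wrap-around on top of the plain range comparison follows the usual pattern; the sketch below is generic, with made-up names rather than the actual nvme code. The first comparison catches the overflow case that a plain slba + nlb > nsze test would miss.

#include <errno.h>
#include <stdint.h>

/* Reject a request whose LBA range either wraps around 64 bits or runs
 * past the namespace size (nsze, in logical blocks). */
static int check_bounds(uint64_t slba, uint32_t nlb, uint64_t nsze)
{
    if (slba + nlb < slba) {
        return -EINVAL;          /* slba + nlb wrapped around */
    }
    if (slba + nlb > nsze) {
        return -EINVAL;          /* range extends past the namespace */
    }
    return 0;
}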

[PATCH 08/16] hw/block/nvme: verify validity of prp lists in the cmb

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Before this patch the device already supported PRP lists in the CMB, but it did not check for the validity of it nor announced the support in the Identify Controller data structure LISTS field. If some of the PRPs in a PRP list are in the CMB, then ALL entries must be there. T

[PATCH 02/16] hw/block/nvme: add mapping helpers

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Add nvme_map_addr, nvme_map_addr_cmb and nvme_addr_to_cmb helpers and use them in nvme_map_prp. This fixes a bug where in the case of a CMB transfer, the device would map to the buffer with a wrong length. Fixes: b2b2b67a00574 ("nvme: Add support for Read Data and Write Data

[PATCH 07/16] hw/block/nvme: add request mapping helper

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Introduce the nvme_map helper to remove some noise in the main nvme_rw function. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 13 ++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c

[PATCH 04/16] hw/block/nvme: remove redundant has_sg member

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen Remove the has_sg member from NvmeRequest since it's redundant. Also, make sure the request iov is destroyed at completion time. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 11 ++- hw/block/nvme.h | 1 - 2 files changed, 6 inserti

[PATCH 01/16] hw/block/nvme: memset preallocated requests structures

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen This is preparatory to subsequent patches that change how QSGs/IOVs are handled. It is important that the qsg and iov members of the NvmeRequest are initially zeroed. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 2 +- 1 file changed, 1 inser

[PATCH 00/16] hw/block/nvme: dma handling and address mapping cleanup

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen This series consists of patches that refactors dma read/write and adds a number of address mapping helper functions. Based-on: <20200706061303.246057-1-...@irrelevant.dk> Klaus Jensen (16): hw/block/nvme: memset preallocated requests structures hw/block/nvme: add mapping

[PATCH 03/16] hw/block/nvme: replace dma_acct with blk_acct equivalent

2020-07-20 Thread Klaus Jensen
From: Klaus Jensen The QSG isn't always initialized, so accounting could be wrong. Issue a call to blk_acct_start instead with the size taken from the QSG or IOV depending on the kind of I/O. Signed-off-by: Klaus Jensen Reviewed-by: Maxim Levitsky --- hw/block/nvme.c | 5 - 1 file changed

Re: [RFC PATCH-for-5.1 v2] hw/ide: Avoid #DIV/0! FPU exception by setting CD-ROM sector count

2020-07-20 Thread Darren Kenny
On Friday, 2020-07-17 at 15:38:47 +02, Philippe Mathieu-Daudé wrote: > libFuzzer found an undefined behavior (#DIV/0!) in ide_set_sector() > when using a CD-ROM (reproducer available on the BugLink): > > UndefinedBehaviorSanitizer:DEADLYSIGNAL > ==12163==ERROR: UndefinedBehaviorSanitizer: FPE o

Re: various iotests failures apparently due to overly optimistic timeout settings

2020-07-20 Thread Kevin Wolf
On 19.07.2020 at 14:07, Peter Maydell wrote: > I just had a bunch of iotests fail on a FreeBSD VM test run. > I think the machine the VM runs on is sometimes a bit heavily > loaded for I/O, which means the VM can run slowly. This causes > various over-optimistic timeouts in the iotest test

[PATCH for-5.1] block: fix bdrv_aio_cancel() for ENOMEDIUM requests

2020-07-20 Thread Stefan Hajnoczi
bdrv_aio_cancel() calls aio_poll() on the AioContext for the given I/O request until it has completed. ENOMEDIUM requests are special because there is no BlockDriverState when the drive has no medium! Define a .get_aio_context() function for BlkAioEmAIOCB requests so that bdrv_aio_cancel() can fin

Re: [RFC PATCH-for-5.1] hw/ide: Do not block for AIO while resetting a drive

2020-07-20 Thread Stefan Hajnoczi
On Fri, Jul 17, 2020 at 07:19:38PM +0200, Philippe Mathieu-Daudé wrote: > Last minute chat: > 19:01 f4bug: use bdrv_aio_cancel_async() if possible because it > won't block the current thread. > 19:02 f4bug: For example, in device emulation code where the guest > has requested to cancel an I/O r

Re: [PATCH v3 00/18] hw/block/nvme: bump to v1.3

2020-07-20 Thread Klaus Jensen
On Jul 6 08:12, Klaus Jensen wrote: > From: Klaus Jensen > > This adds mandatory features of NVM Express v1.3 to the emulated NVMe > device. > > > v3: > * hw/block/nvme: additional tracing > - Reverse logic in nvme_cid(). (Philippe) > - Move nvme_cid() and nvme_sqid() to source file.

[PATCH 1/3] block/nbd: allow drain during reconnect attempt

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
It should be safe to reenter qio_channel_yield() on the io/channel read/write path, so it's safe to reduce in_flight and allow attaching a new aio context. And there is no problem with allowing drain itself: a connection attempt is not a guest request. Moreover, if the remote server is down, we can hang in negotiation, blocking

[PATCH 2/3] block/nbd: on shutdown terminate connection attempt

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
On shutdown the nbd driver may be in a connecting state. We should shut it down as well, otherwise we may hang in nbd_teardown_connection, waiting for connection_co to finish in the BDRV_POLL_WHILE(bs, s->connection_co) loop if the remote server is down. How to reproduce the deadlock: 1. Create nbd-fault-inj

Re: [PATCH 1/3] block/nbd: allow drain during reconnect attempt

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
20.07.2020 12:00, Vladimir Sementsov-Ogievskiy wrote: > It should be to reenter qio_channel_yield() on io/channel read/write ("should be safe", I mean) > path, so it's safe to reduce in_flight and allow attaching new aio context. And no problem to allow drain itself: connection attempt is not a guest

[PATCH for-5.1? 0/3] Fix nbd reconnect dead-locks

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
Hi all! I've found some dead-locks, which can be easily triggered on master branch with default nbd configuration (reconnect-delay is 0), here are fixes. 01-02 fix real dead-locks 03 - hm. I'm not sure that the problem is reachable on master, I've faced it in my development branch where I move i

[PATCH 3/3] block/nbd: nbd_co_reconnect_loop(): don't sleep if drained

2020-07-20 Thread Vladimir Sementsov-Ogievskiy
We try to go into a wakeable sleep, so that, if drain begins, it will break the sleep. But what if nbd_client_co_drain_begin() has already been called and s->drained is already true? We'll go to sleep, and drain will have to wait for the whole timeout. Let's improve it. Signed-off-by: Vladimir Sementsov-Ogievsk

Re: [PATCH v5 10/11] hw/arm: Wire up BMC boot flash for npcm750-evb and quanta-gsj

2020-07-20 Thread Markus Armbruster
Philippe Mathieu-Daudé writes: > On 7/17/20 10:27 AM, Philippe Mathieu-Daudé wrote: >> On 7/17/20 10:03 AM, Thomas Huth wrote: >>> On 17/07/2020 09.48, Philippe Mathieu-Daudé wrote: +Thomas >>> On 7/16/20 10:56 PM, Havard Skinnemoen wrote: > On Wed, Jul 15, 2020 at 1:54 PM Havard Sk