On Thu, Mar 08, 2018 at 22:07:35 +0300, Vladimir Sementsov-Ogievskiy wrote:
> 08.03.2018 21:56, Emilio G. Cota wrote:
> > * Binning happens only at print time, so that we retain the flexibility to
> > * choose the binning. This might not be ideal for workloads that do not
> > * care much
On 03/08/2018 12:58 PM, Vladimir Sementsov-Ogievskiy wrote:
Hm, these numbers are actually the boundary points of the histogram
intervals, not the intervals themselves. And Wikipedia says "The bins are
usually specified as consecutive, non-overlapping intervals of a variable.",
so the intervals are the bins.
So, what
08.03.2018 21:56, Emilio G. Cota wrote:
On Thu, Mar 08, 2018 at 14:42:29 +0300, Vladimir Sementsov-Ogievskiy wrote:
Hi Emilio!
I looked through qdist; if I understand correctly, it saves each added
element (with a distinct value). That is not efficient for disk I/O timing -
we'll have too many
08.03.2018 21:21, Vladimir Sementsov-Ogievskiy wrote:
08.03.2018 21:14, Vladimir Sementsov-Ogievskiy wrote:
08.03.2018 20:31, Eric Blake wrote:
On 03/06/2018 09:32 AM, Stefan Hajnoczi wrote:
On Wed, Feb 07, 2018 at 03:50:36PM +0300, Vladimir Sementsov-Ogievskiy wrote:
Introduce latency histogram statistics for block devices.
On Thu, Mar 08, 2018 at 14:42:29 +0300, Vladimir Sementsov-Ogievskiy wrote:
> Hi Emilio!
>
> I looked through qdist; if I understand correctly, it saves each added
> element (with a distinct value). That is not efficient for disk I/O timing -
> we'll have too many elements. In my approach, the histogram
To be reused in nbd_co_send_sparse_read() in the following patch.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
nbd/server.c | 48
1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/nbd/server.c b/nbd/server.c
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
It's like an RFC. I'm not sure, but this place looks like a bug. Shouldn't
we check client->closing even before the nbd_client_receive_next_request() call?
nbd/server.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
01 and 02 are the old "[PATCH] nbd/server: fix sparse read", split and
updated; the others are new.
Vladimir Sementsov-Ogievskiy (5):
nbd/server: move nbd_co_send_structured_error up
nbd/server: fix sparse read
nbd/server: fix: check client->closing before reply sending
nbd/server: refactor
nbd_trip has convoluted logic for sending replies: it tries to use one
code path for all replies. That is fine for simple replies, but is
awkward for structured replies. Also, there are two types of error (each
with a corresponding message in local_err) - fatal (leading to disconnect)
and non-fatal (just to be sent
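A rough sketch of the two error classes described here; the enum and
function names are illustrative, not the actual nbd/server.c API:

#include <stdbool.h>

typedef enum {
    NBD_REPLY_OK,   /* reply sent successfully, keep the connection */
    NBD_ERR_LOCAL,  /* non-fatal: send an error reply, keep going */
    NBD_ERR_FATAL,  /* fatal: do not reply, disconnect the client */
} NBDTripResult;

/* Decide what to do after handling one request. */
static bool keep_connection(NBDTripResult res)
{
    switch (res) {
    case NBD_REPLY_OK:
    case NBD_ERR_LOCAL:
        return true;   /* the error (if any) is reported to the client */
    case NBD_ERR_FATAL:
    default:
        return false;  /* reply path unusable, tear the connection down */
    }
}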
Split out request handling logic.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
nbd/server.c | 129 +++
1 file changed, 67 insertions(+), 62 deletions(-)
diff --git a/nbd/server.c b/nbd/server.c
index
08.03.2018 21:14, Vladimir Sementsov-Ogievskiy wrote:
08.03.2018 20:31, Eric Blake wrote:
On 03/06/2018 09:32 AM, Stefan Hajnoczi wrote:
On Wed, Feb 07, 2018 at 03:50:36PM +0300, Vladimir Sementsov-Ogievskiy wrote:
Introduce latency histogram statistics for block devices.
For each accounted
08.03.2018 20:31, Eric Blake wrote:
On 03/06/2018 09:32 AM, Stefan Hajnoczi wrote:
On Wed, Feb 07, 2018 at 03:50:36PM +0300, Vladimir Sementsov-Ogievskiy wrote:
Introduce latency histogram statistics for block devices.
For each accounted operation type, the latency region [0, +inf) is
divided into
On Wed, Mar 07, 2018 at 02:42:01PM +0000, Stefan Hajnoczi wrote:
> v3:
> * Rebase on qemu.git/master after AIO_WAIT_WHILE() was merged [Fam]
> v2:
> * Tackle the .ioeventfd_stop() vs vq handler race by removing the ioeventfd
> from a BH in the IOThread [Fam]
>
> There are several race
On Wed, Mar 07, 2018 at 05:27:45PM -0600, Eric Blake wrote:
> On 03/06/2018 02:48 PM, Stefan Hajnoczi wrote:
> > The blockdev-snapshot-sync command uses bdrv_append() to update all parents
> > to point at the external snapshot node. This breaks BlockBackend's
On 03/06/2018 09:32 AM, Stefan Hajnoczi wrote:
On Wed, Feb 07, 2018 at 03:50:36PM +0300, Vladimir Sementsov-Ogievskiy wrote:
Introduce latency histogram statistics for block devices.
For each accounted operation type, the latency region [0, +inf) is
divided into subregions by several points. Then,
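A minimal sketch of the bin lookup this describes, assuming ascending
boundary points; the names are illustrative, not the patch's API:

#include <stddef.h>
#include <stdint.h>

/* Points p[0] < ... < p[n-1] split [0, +inf) into n+1 intervals:
 * [0, p[0]), [p[0], p[1)), ..., [p[n-1], +inf).
 * Return the index of the interval containing latency_ns. */
static size_t latency_bin(const uint64_t *points, size_t npoints,
                          uint64_t latency_ns)
{
    size_t i;
    for (i = 0; i < npoints; i++) {
        if (latency_ns < points[i]) {
            break;
        }
    }
    return i;   /* 0..npoints, i.e. npoints + 1 bins in total */
}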
On 03/06/2018 10:00 AM, Stefan Hajnoczi wrote:
On Wed, Feb 07, 2018 at 03:50:35PM +0300, Vladimir Sementsov-Ogievskiy wrote:
v2:
01: add block_latency_histogram_clear()
02: fix spelling (sorry =()
some rewordings
remove histogram if latency parameter unspecified
Vladimir
On Wed, Mar 07, 2018 at 09:36:38PM +0100, Peter Lieven wrote:
> On 06.03.2018 at 12:51, Stefan Hajnoczi wrote:
> > On Tue, Feb 20, 2018 at 06:04:02PM +0100, Peter Lieven wrote:
> >> I remember we discussed a long time ago limiting the stack usage of all
> >> functions that are executed in a
On 08/03/2018 17:01, Michael S. Tsirkin wrote:
> On Wed, Mar 07, 2018 at 02:42:01PM +0000, Stefan Hajnoczi wrote:
>> v3:
>> * Rebase on qemu.git/master after AIO_WAIT_WHILE() was merged [Fam]
>> v2:
>> * Tackle the .ioeventfd_stop() vs vq handler race by removing the ioeventfd
>> from a BH in
On 03/08/2018 09:22 AM, Paolo Bonzini wrote:
TRIM requests should not need FUA since they're just advisory.
Still, while you argue that TRIM is advisory (and I agree), if it does
nothing, then you've (implicitly) honored FUA (that transaction didn't
affect persistent storage, so you didn't
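A sketch of that reasoning with hypothetical helper names: if the trim
changed nothing, FUA is implicitly honored; otherwise flushing before
replying is a correct, if conservative, way to honor it:

#include <stdbool.h>

static int flush_to_disk(void)
{
    return 0; /* stand-in for a real flush of the underlying storage */
}

static int trim_then_honor_fua(int trim_ret, bool fua, bool data_changed)
{
    if (trim_ret < 0) {
        return trim_ret;        /* the trim itself failed */
    }
    if (fua && data_changed) {
        return flush_to_disk(); /* settle the change before replying */
    }
    return 0; /* no-op trim: persistent state untouched, FUA satisfied */
}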
On Wed, Mar 07, 2018 at 12:44:59PM +0100, Sergio Lopez wrote:
> Commit 5b2ffbe4d99843fd8305c573a100047a8c962327 ("virtio-blk: dataplane:
> notify guest as a batch") deferred guest notification to a BH in order
> to batch notifications, with the purpose of avoiding flooding the guest
> with interrupts.
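A sketch of the batching idea with illustrative names, not the real
virtio-blk dataplane code: completions only schedule a bottom half, and
the BH raises a single guest notification for the whole batch:

#include <stdbool.h>

typedef struct {
    bool bh_scheduled;  /* a notify BH is already pending */
    unsigned completed; /* completions since the last notification */
} NotifyBatcher;

static void notify_guest(void)
{
    /* stand-in for injecting one interrupt/event into the guest */
}

/* Runs later from the event loop, as a bottom half would. */
static void notify_bh(NotifyBatcher *b)
{
    b->bh_scheduled = false;
    if (b->completed) {
        b->completed = 0;
        notify_guest(); /* one notification covers the whole batch */
    }
}

/* Called per completed request instead of notifying immediately. */
static void request_completed(NotifyBatcher *b)
{
    b->completed++;
    if (!b->bh_scheduled) {
        b->bh_scheduled = true; /* schedule notify_bh() once per batch */
    }
}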
On Wed, Mar 07, 2018 at 02:42:01PM +0000, Stefan Hajnoczi wrote:
> v3:
> * Rebase on qemu.git/master after AIO_WAIT_WHILE() was merged [Fam]
> v2:
> * Tackle the .ioeventfd_stop() vs vq handler race by removing the ioeventfd
> from a BH in the IOThread [Fam]
Acked-by: Michael S. Tsirkin
On 08/03/2018 15:45, Eric Blake wrote:
> On 03/08/2018 12:50 AM, Paolo Bonzini wrote:
>>> The NBD spec states that since trim requests can affect disk contents,
>>> they should allow for FUA semantics, just like writes, for ensuring
>>> the disk has settled before returning. As
08.03.2018 18:17, Vladimir Sementsov-Ogievskiy wrote:
08.03.2018 14:50, Vladimir Sementsov-Ogievskiy wrote:
05.03.2018 22:47, Eric Blake wrote:
On 03/05/2018 12:04 PM, Vladimir Sementsov-Ogievskiy wrote:
In case of an I/O error in nbd_co_send_sparse_read we should not
"goto reply:", as it is
08.03.2018 14:50, Vladimir Sementsov-Ogievskiy wrote:
05.03.2018 22:47, Eric Blake wrote:
On 03/05/2018 12:04 PM, Vladimir Sementsov-Ogievskiy wrote:
In case of an I/O error in nbd_co_send_sparse_read we should not
"goto reply:", as it is a fatal error and the common behavior is to
disconnect in this case.
On 03/08/2018 12:50 AM, Paolo Bonzini wrote:
The NBD spec states that since trim requests can affect disk contents,
they should allow for FUA semantics, just like writes, for ensuring
the disk has settled before returning. As bdrv_[co_]pdiscard() does
not (yet?) support a flags argument,
On 08.03.2018 at 13:50, Juan Quintela wrote:
Peter Lieven wrote:
the current implementation submits up to 512 I/O requests in parallel,
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in parallel to
On 7 March 2018 at 11:25, Daniel P. Berrangé wrote:
> The following changes since commit f2bb2d14c2958f3f5aef456bd2cdb1ff99f4a562:
>
> Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request'
> into staging (2018-03-05 16:41:20 +0000)
>
> are available in
Peter Lieven wrote:
> the current implementation submits up to 512 I/O requests in parallel,
> which is much too high, especially for a background task.
> This patch adds a maximum limit of 16 I/O requests that can
> be submitted in parallel to avoid monopolizing the I/O device.
>
>
05.03.2018 22:47, Eric Blake wrote:
On 03/05/2018 12:04 PM, Vladimir Sementsov-Ogievskiy wrote:
In case of an I/O error in nbd_co_send_sparse_read we should not
"goto reply:", as it is a fatal error and the common behavior is to
disconnect in this case. We should not try to send the client an
error reply,
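A sketch of the control flow being argued for, with illustrative names:
once part of a structured reply has hit the wire, a clean error reply is
no longer possible, so the error must be treated as fatal:

#include <stdbool.h>

typedef enum {
    ERR_ACTION_REPLY,      /* send an error reply, keep the client */
    ERR_ACTION_DISCONNECT, /* drop the client, the stream is broken */
} ErrAction;

static ErrAction on_sparse_read_error(bool reply_partially_sent)
{
    if (reply_partially_sent) {
        /* "goto reply" would interleave a second reply mid-stream */
        return ERR_ACTION_DISCONNECT;
    }
    return ERR_ACTION_REPLY; /* nothing sent yet, a clean reply is safe */
}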
Hi Emilio!
I looked through qdist; if I understand correctly, it saves each added
element (with a distinct value). That is not efficient for disk I/O timing -
we'll have too many elements. In my approach, the histogram doesn't grow: it
initially has several ranges and counts hits in each range.
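A rough sketch of that approach (illustrative C, not the actual patch):
the histogram is allocated once with npoints + 1 counters and never
grows; accounting is a lookup plus an increment:

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t *boundaries; /* npoints ascending boundary points */
    uint64_t *bins;       /* npoints + 1 hit counters */
    size_t npoints;
} LatencyHistogram;

static void histogram_account(LatencyHistogram *h, uint64_t latency_ns)
{
    size_t i = 0;
    while (i < h->npoints && latency_ns >= h->boundaries[i]) {
        i++;
    }
    h->bins[i]++; /* memory use stays fixed, unlike storing every sample */
}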
Peter Lieven wrote:
> Reset the dirty bitmap before reading to make sure we don't miss
> any new data.
>
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Peter Lieven
Reviewed-by: Juan Quintela
Peter Lieven wrote:
> this patch makes the bulk phase of a block migration take
> place before we start transferring ram. As the bulk block migration
> can take a long time, it's pointless to transfer ram during that phase.
>
> Signed-off-by: Peter Lieven
>
On 08.03.2018 at 11:21, Daniel P. Berrangé wrote:
> On Wed, Mar 07, 2018 at 07:59:09PM +0100, Kevin Wolf wrote:
> > This series implements a minimal QMP command that allows creating an
> > image file on the protocol level or an image format on a given block
> > node.
> >
> > Eventually,
Peter Lieven (5):
migration: do not transfer ram during bulk storage migration
migration/block: reset dirty bitmap before read in bulk phase
migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS
migration/block: limit the number of parallel I/O requests
migration/block: compare only
this patch makes the bulk phase of a block migration take
place before we start transferring ram. As the bulk block migration
can take a long time, it's pointless to transfer ram during that phase.
Signed-off-by: Peter Lieven
Reviewed-by: Stefan Hajnoczi
---
the current implementation submits up to 512 I/O requests in parallel,
which is much too high, especially for a background task.
This patch adds a maximum limit of 16 I/O requests that can
be submitted in parallel to avoid monopolizing the I/O device.
Signed-off-by: Peter Lieven
---
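A sketch of the limiting scheme described above, with illustrative
names: a counter of in-flight requests gates submission at 16:

#define MAX_PARALLEL_IO 16

static unsigned inflight_io;

static int submit_allowed(void)
{
    return inflight_io < MAX_PARALLEL_IO; /* else wait for completions */
}

static void io_submitted(void)
{
    inflight_io++;  /* paired with io_completed() in the callback */
}

static void io_completed(void)
{
    inflight_io--;  /* lets the submission loop make progress again */
}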
Reset the dirty bitmap before reading to make sure we don't miss
any new data.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Peter Lieven
---
migration/block.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index
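A sketch of the ordering this patch establishes (illustrative names,
not the actual migration/block.c hunk): clear the dirty bits before
issuing the read, so a guest write racing with the read re-dirties the
range and the block is sent again later; clearing after the read could
lose that write:

static void reset_dirty_bitmap(long sector, int nr_sectors)
{
    (void)sector; (void)nr_sectors; /* stand-in for the bitmap reset */
}

static void start_async_read(long sector, int nr_sectors)
{
    (void)sector; (void)nr_sectors; /* stand-in for blk_aio_preadv() */
}

static void save_bulk_block(long sector, int nr_sectors)
{
    reset_dirty_bitmap(sector, nr_sectors); /* first: forget old state */
    start_async_read(sector, nr_sectors);   /* then: read for migration */
}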
Only read_done blocks are queued to be flushed to the migration
stream; submitted blocks are still in flight.
Signed-off-by: Peter Lieven
---
migration/block.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index
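A sketch of the accounting distinction (illustrative names): a block
that is merely submitted has no filled buffer yet; only read_done
blocks hold buffer memory waiting for the migration stream, so only
they should count against the buffer limit:

static int should_throttle(int submitted, int read_done, int max_buffers)
{
    (void)submitted;                 /* still in flight on the disk side */
    return read_done >= max_buffers; /* count completed buffers only */
}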
On 08.03.2018 at 10:01, Fam Zheng wrote:
On Thu, Mar 8, 2018 at 4:57 PM, Peter Lieven wrote:
On 08.03.2018 at 02:28, Fam Zheng wrote:
On Wed, 03/07 09:06, Peter Lieven wrote:
Hi,
while looking at the code I wonder if the blk_aio_preadv and the
On Wed, Mar 07, 2018 at 07:59:09PM +0100, Kevin Wolf wrote:
> This series implements a minimal QMP command that allows creating an
> image file on the protocol level or an image format on a given block
> node.
>
> Eventually, the interface is going to change to some kind of an async
> command
On Thu, Mar 8, 2018 at 4:57 PM, Peter Lieven wrote:
>
>
>> On 08.03.2018 at 02:28, Fam Zheng wrote:
>>
>>> On Wed, 03/07 09:06, Peter Lieven wrote:
>>> Hi,
>>>
>>> while looking at the code I wonder whether the order of blk_aio_preadv
>>> and bdrv_reset_dirty_bitmap
> On 08.03.2018 at 02:28, Fam Zheng wrote:
>
>> On Wed, 03/07 09:06, Peter Lieven wrote:
>> Hi,
>>
>> while looking at the code I wonder whether the order of blk_aio_preadv
>> and bdrv_reset_dirty_bitmap must
>> be swapped in mig_save_device_bulk:
>>
>>