On Wed, 04/17 22:53, Maxim Levitsky wrote:
> Signed-off-by: Maxim Levitsky
> ---
> block/nvme.c | 80 ++
> block/trace-events | 2 ++
> 2 files changed, 82 insertions(+)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index
On Wed, 04/17 22:53, Maxim Levitsky wrote:
> Signed-off-by: Maxim Levitsky
> ---
> block/nvme.c | 69 +++-
> block/trace-events | 1 +
> include/block/nvme.h | 19 +++-
> 3 files changed, 87 insertions(+), 2 deletions(-)
>
> diff --git
Patchew URL: https://patchew.org/QEMU/20190605213654.9785-1-ptosc...@redhat.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Message-id: 20190605213654.9785-1-ptosc...@redhat.com
Type: series
Subject: [Qemu-devel] [PATCH v6] ssh: switch
Rewrite the implementation of the ssh block driver to use libssh instead
of libssh2. The libssh library has various advantages over libssh2:
- easier API for authentication (for example for using ssh-agent)
- easier API for known_hosts handling
- supports newer types of keys in known_hosts
Use
On 6/5/19 2:48 PM, Eric Blake wrote:
> This also made me wonder if we should start a deprecation clock to
> improve the nbd-server-start command to use SocketAddress instead of
> SocketAddressLegacy. If we revive Max's work on implementing a default
> branch for a union discriminator
>
On 6/5/19 12:36 PM, Daniel P. Berrangé wrote:
>>
>> Ok.
>>
>> One more thing to discuss then. Should I add keepalive directly to
>> BlockdevOptionsNbd?
>>
>> Seems more useful to put it into SocketAddress, to be reused by other socket
>> users..
>> But "SocketAddress" sounds like address, not
On 2/8/19 12:21 PM, Max Reitz wrote:
> On 07.02.19 07:56, Markus Armbruster wrote:
>> Max Reitz writes:
>>
>>> This patch allows specifying a discriminator that is an optional member
>>> of the base struct. In such a case, a default value must be provided
>>> that is used when no value is given.
Patchew URL:
https://patchew.org/QEMU/20190604161514.262241-1-vsement...@virtuozzo.com/
Hi,
This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
On Wed, Jun 05, 2019 at 05:28:05PM +, Vladimir Sementsov-Ogievskiy wrote:
> 05.06.2019 20:12, Eric Blake wrote:
> > On 6/5/19 12:05 PM, Vladimir Sementsov-Ogievskiy wrote:
> >
> >>> By enabling TCP keepalives we are explicitly making the connection
> >>> less reliable by forcing it to be
05.06.2019 20:12, Eric Blake wrote:
> On 6/5/19 12:05 PM, Vladimir Sementsov-Ogievskiy wrote:
>
>>> By enabling TCP keepalives we are explicitly making the connection
>>> less reliable by forcing it to be terminated when keepalive
>>> threshold triggers, instead of waiting longer for TCP to
05.06.2019 20:11, Kevin Wolf wrote:
> Am 05.06.2019 um 14:32 hat Vladimir Sementsov-Ogievskiy geschrieben:
> The child_role job already has .stay_at_node=true, so on a bdrv_replace_node
> operation these children are unchanged. Make the block job blk behave in the
> same manner, to avoid inconsistent
On 6/5/19 12:05 PM, Vladimir Sementsov-Ogievskiy wrote:
>> By enabling TCP keepalives we are explicitly making the connection
>> less reliable by forcing it to be terminated when keepalive
>> threshold triggers, instead of waiting longer for TCP to recover.
>>
>> The rationale is that once a
Am 05.06.2019 um 14:32 hat Vladimir Sementsov-Ogievskiy geschrieben:
> The child_role job already has .stay_at_node=true, so on a bdrv_replace_node
> operation these children are unchanged. Make the block job blk behave in the
> same manner, to avoid inconsistent intermediate graph states and workarounds
> like in
05.06.2019 19:37, Daniel P. Berrangé wrote:
> On Wed, Jun 05, 2019 at 09:39:10AM -0500, Eric Blake wrote:
>> On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Enable keepalive option to track server availablity.
>>
>> s/availablity/availability/
>>
>> Do we want this unconditionally, or
On Wed, Jun 05, 2019 at 09:39:10AM -0500, Eric Blake wrote:
> On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> > Enable keepalive option to track server availablity.
>
> s/availablity/availability/
>
> Do we want this unconditionally, or should it be an option (and hence
> exposed over
On Wed, Jun 05, 2019 at 07:18:03PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> include/io/channel.h | 15 +++
> io/channel-socket.c | 20
> io/channel.c | 14 ++
> 3 files changed, 49
Enable keepalive option to track server availability.
Requested-by: Denis V. Lunev
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/nbd-client.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 790ecc1ee1..b57cea8482 100644
---
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/io/channel.h | 15 +++
io/channel-socket.c | 20
io/channel.c | 14 ++
3 files changed, 49 insertions(+)
diff --git a/include/io/channel.h b/include/io/channel.h
index
Hi all!
Here is a suggestion to enable the keepalive option to track server availability.
We suggest enabling it by default. If needed, we can later add an option
to specify the timeout by hand.
v2: 01 - Fix io channel returned errors to be -1 [Daniel]
02 - Fix typo in commit message [Eric]
On 05.06.19 17:54, Vladimir Sementsov-Ogievskiy wrote:
> The test fails at least for qcow, because of different cluster sizes in
> base and top (and therefore different granularities of the bitmaps we are
> trying to merge).
>
> The test's aim is to check block-dirty-bitmap-merge between different
> nodes
Reliably ending the drain on a BDS's parents is quite difficult. What
we have to achieve is to undrain exactly those parents that have been
added to the BDS while its quiesce_counter was elevated. If we move
decrementing the quiesce_counter before the invocation of
bdrv_parent_drained_end(),
If a test has issued a quit command already (which may be useful to do
explicitly because the test wants to show its effects),
QEMUMachine.shutdown() should not do so again. Otherwise, the VM may
well return an ECONNRESET, which will lead QEMUMachine.shutdown() to
kill it, which then turns into
We currently do not keep track of how many times a child has quiesced
its parent. We just guess based on the child’s quiesce_counter. That
keeps biting us when we try to leave drained sections or detach children
(see e.g. commit 5cb2737e925042e).
I think we need an explicit counter to keep
Commit 5cb2737e925042e6c7cd3fb0b01313950b03cddf laid out why
bdrv_do_drained_end() must decrement the quiesce_counter after
bdrv_drain_invoke(). It did not give a very good reason why it has to
happen after bdrv_parent_drained_end(), instead only claiming symmetry
to bdrv_do_drained_begin().
It
05.06.2019 19:02, Daniel P. Berrangé wrote:
> On Wed, Jun 05, 2019 at 01:09:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Signed-off-by: Vladimir Sementsov-Ogievskiy
>> ---
>> include/io/channel.h | 13 +
>> io/channel-socket.c | 19 +++
>> io/channel.c
Before the previous patches, the first case resulted in a failed
assertion (which is noted as qemu receiving a SIGABRT in the test
output), and the second usually triggered a segmentation fault.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/040 | 40 +-
On Wed, Jun 05, 2019 at 09:38:06AM -0500, Eric Blake wrote:
> On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> > Signed-off-by: Vladimir Sementsov-Ogievskiy
> > ---
> > include/io/channel.h | 13 +
> > io/channel-socket.c | 19 +++
> > io/channel.c |
On Wed, Jun 05, 2019 at 01:09:12PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> include/io/channel.h | 13 +
> io/channel-socket.c | 19 +++
> io/channel.c | 14 ++
> 3 files changed, 46
On Sat, May 25, 2019 at 10:05:59AM +0100, Stefan Hajnoczi wrote:
> Now that liburing has pkg-config support, use it instead of hardcoding
> compiler flags in QEMU's build scripts. This way distros can customize
> the location of liburing's headers and libraries without requiring
> changes to
The test fails at least for qcow, because of different cluster sizes in
base and top (and therefore different granularities of the bitmaps we are
trying to merge).
The test's aim is to check block-dirty-bitmap-merge functionality between
different nodes; there is no need to check all formats. So, let's just drop
05.06.2019 18:33, Max Reitz wrote:
> On 05.06.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>> The qcow default cluster size is 4k, but the default format of the overlay
>> image created on a snapshot operation is qcow2 with its default cluster size of 64k.
>
> Then I wonder why we run this test even for anything
On 05.06.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
> The qcow default cluster size is 4k, but the default format of the overlay
> image created on a snapshot operation is qcow2 with its default cluster size of 64k.
Then I wonder why we run this test even for anything but qcow2.
I forgot to mention that this
The qcow default cluster size is 4k, but the default format of the overlay
image created on a snapshot operation is qcow2 with its default cluster size
of 64k. This leads to a block-dirty-bitmap-merge failure when the test runs
for the qcow format, as it can't merge bitmaps with different granularities.
Let's fix it by
05.06.2019 17:51, Max Reitz wrote:
> On 17.05.19 17:21, Vladimir Sementsov-Ogievskiy wrote:
>> This test shows that external snapshots and incremental backups are
>> friends.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy
>> ---
>> tests/qemu-iotests/254 | 52
On 17.05.19 17:21, Vladimir Sementsov-Ogievskiy wrote:
> This test shows that external snapshots and incremental backups are
> friends.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> tests/qemu-iotests/254 | 52 ++
> tests/qemu-iotests/254.out |
On 6/5/19 5:39 PM, Eric Blake wrote:
> On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Enable keepalive option to track server availablity.
> s/availablity/availability/
>
> Do we want this unconditionally, or should it be an option (and hence
> exposed over QMP)?
That is a good question,
On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> Enable keepalive option to track server availablity.
s/availablity/availability/
Do we want this unconditionally, or should it be an option (and hence
exposed over QMP)?
>
> Requested-by: Denis V. Lunev
> Signed-off-by: Vladimir
On 6/5/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> include/io/channel.h | 13 +
> io/channel-socket.c | 19 +++
> io/channel.c | 14 ++
> 3 files changed, 46 insertions(+)
Dan, if you'd
04.06.2019 0:46, John Snow wrote:
> Pygments and Sphinx get pickier all the time; Sphinx 2.1+ now catches
> these errors.
>
> Signed-off-by: John Snow
Reviewed-by: Vladimir Sementsov-Ogievskiy
> ---
> docs/interop/bitmaps.rst | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>
John Snow writes:
> On 5/31/19 10:55 AM, Eric Blake wrote:
>> On 5/30/19 11:26 AM, John Snow wrote:
>>>
>>>
>>> On 5/30/19 10:39 AM, Vladimir Sementsov-Ogievskiy wrote:
Let's add a possibility to query dirty-bitmaps not only on root nodes.
It is useful when dealing both with snapshots
Add stay_at_node fields to BlockBackend and BdrvChild, for the same
behavior as stay_at_node field of BdrvChildRole. It will be used for
block-job blk.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block_int.h | 6 ++
include/sysemu/block-backend.h | 2 ++
block.c
The child_role job already has .stay_at_node=true, so on a bdrv_replace_node
operation these children are unchanged. Make the block job blk behave in the
same manner, to avoid inconsistent intermediate graph states and workarounds
like in mirror.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/mirror.c |
Hi all.
Here is a proposal to replace the workaround in mirror, where
we have to move the filter node back to the block-job blk after
bdrv_replace_node.
v2: rebased on the updated blk_new, with an aio context parameter.
Vladimir Sementsov-Ogievskiy (2):
block: introduce pinned blk
blockjob: use
Until ESXi 6.5, VMware used the vmfsSparse format for snapshots (VMDK3 in
QEMU).
This format had the following limitations:
* Grain directory (L1) and grain table (L2) entries were 32-bit,
allowing access to only 2TB (slightly less) of data.
* The grain size (default) was 512 bytes -
512M of L1 entries is a very loose bound; only 32M are required to store
the maximal supported VMDK file size of 2TB.
Fixed qemu-iotest #59 - the failure now occurs earlier, on an impossible L1
table size.
Reviewed-by: Karl Heubaum
Reviewed-by: Eyal Moscovici
Reviewed-by: Liran Alon
Reviewed-by:
v1:
VMware introduced a new snapshot format in VMFS6 - seSparse (Space
Efficient Sparse) which is the default format available in ESXi 6.7.
Add read-only support for the new snapshot format.
v2:
Fixed after Max's review:
* Removed strict sesparse checks
* Reduced maximal L1 table size
* Added
Commit b0651b8c246d ("vmdk: Move l1_size check into vmdk_add_extent")
extended the l1_size check from VMDK4 to VMDK3 but did not update the
default coverage in the moved comment.
The previous vmdk4 calculation:
(512 * 1024 * 1024) * 512(l2 entries) * 65536(grain) = 16PB
The added vmdk3
04.06.2019 19:15, Vladimir Sementsov-Ogievskiy wrote:
> Introduce new initialization API, to create requests with padding. Will
> be used in the following patch. New API uses qemu_iovec_init_buf if
> resulting io vector has only one element, to avoid extra allocations.
> So, we need to update
Hi all!
Here is a suggestion to enable the keepalive option to track server availability.
Vladimir Sementsov-Ogievskiy (2):
io/channel: add qio_channel_set_keepalive
nbd-client: enable TCP keepalive
include/io/channel.h | 13 +
block/nbd-client.c | 1 +
io/channel-socket.c | 19
Enable keepalive option to track server availablity.
Requested-by: Denis V. Lunev
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/nbd-client.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/block/nbd-client.c b/block/nbd-client.c
index 790ecc1ee1..b57cea8482 100644
---
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/io/channel.h | 13 +
io/channel-socket.c | 19 +++
io/channel.c | 14 ++
3 files changed, 46 insertions(+)
diff --git a/include/io/channel.h b/include/io/channel.h
index
Am 04.06.2019 um 19:06 hat Heitke, Kenneth geschrieben:
>
>
> On 6/4/2019 3:13 AM, Klaus Birkelund wrote:
> > On Tue, Jun 04, 2019 at 10:46:45AM +0200, Kevin Wolf wrote:
> > > Am 04.06.2019 um 10:28 hat Klaus Birkelund geschrieben:
> > > > On Mon, Jun 03, 2019 at 09:30:53AM -0600, Heitke,
On Mon, 2019-06-03 at 18:25 -0400, John Snow wrote:
>
> On 4/17/19 3:53 PM, Maxim Levitsky wrote:
> > Phase bits are only set by the hardware to indicate new completions
> > and not by the device driver.
> >
> > Signed-off-by: Maxim Levitsky
> > ---
> > block/nvme.c | 2 --
> > 1 file changed,