On 7/7/21 6:14 PM, Stefan Hajnoczi wrote:
On Wed, Jul 07, 2021 at 12:43:56PM +0200, Hannes Reinecke wrote:
On 7/7/21 11:53 AM, Klaus Jensen wrote:
On Jul 7 09:49, Hannes Reinecke wrote:
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are a
On Jul 7 18:56, Klaus Jensen wrote:
On Jul 7 17:57, Hannes Reinecke wrote:
On 7/7/21 5:49 PM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent
Enhance the test to demonstrate behavior of qemu-img with a qcow2
image containing an inconsistent bitmap, and rename it now that we
support useful iotest names.
While at it, fix a missing newline in the error message thus exposed.
Signed-off-by: Eric Blake
---
block/dirty-bitmap.c
The point of 'qemu-img convert --bitmaps' is to be a convenience for
actions that are already possible through a string of smaller
'qemu-img bitmap' sub-commands. One situation not accounted for
already is that if a source image contains an inconsistent bitmap (for
example, because a qemu process
This is mostly a convenience factor as one could already use 'qemu-img
info' to learn which bitmaps are broken and then 'qemu-img bitmap
--remove' to nuke them before calling 'qemu-img convert --bitmaps',
but it does have the advantage that the copied file is usable without
extra effort and the br
On 7/7/21 11:04 AM, Peter Lieven wrote:
> task->complete is a bool, not an integer.
>
> Signed-off-by: Peter Lieven
> ---
> block/rbd.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/rbd.c b/block/rbd.c
> index 01a7b94d62..dcf82b15b8 100644
> --- a/block/rbd.c
> +
On Mon, May 03, 2021 at 04:35:58PM -0500, Eric Blake wrote:
> We've gone enough release cycles without noticeable pushback on our
> intentions, so time to make it harder to create images that can form a
> security hole due to a need for format probing rather than an explicit
> format.
>
> Eric Bla
On Mon, Jul 05, 2021 at 11:07:21AM +0300, Vladimir Sementsov-Ogievskiy wrote:
> 04.07.2021 01:07, Lukas Straub wrote:
> > Although unlikely, qemu might hang in nbd_send_request().
> >
> > Allow recovery in this case by registering the yank function before
> > calling it.
> >
> > Signed-off-by: Lu
On Wed, Jul 7, 2021 at 9:41 PM Eric Blake wrote:
>
> Reword the paragraphs to list the JSON key first, rather than in the
> middle of prose.
>
> Suggested-by: Vladimir Sementsov-Ogievskiy
> Signed-off-by: Eric Blake
> ---
> docs/tools/qemu-img.rst | 20 ++--
> 1 file changed, 10
Reword the paragraphs to list the JSON key first, rather than in the
middle of prose.
Suggested-by: Vladimir Sementsov-Ogievskiy
Signed-off-by: Eric Blake
---
docs/tools/qemu-img.rst | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/docs/tools/qemu-img.r
Remove the workaround introduced in commit
6ecbc6c52672db5c13805735ca02784879ce8285
"replication: Avoid blk_make_empty() on read-only child".
It is not needed anymore since s->hidden_disk is guaranteed to be
writable when secondary_do_checkpoint() runs. Because replication_start(),
_do_checkpoint(
The replication driver needs access to the children block-nodes of
its child so it can issue bdrv_make_empty() and bdrv_co_pwritev()
to manage the replication. However, it does this by directly copying
the BdrvChilds, which is wrong.
Fix this by properly attaching the block-nodes with
bdrv_attach
In preparation for the next patch, initialize s->hidden_disk and
s->secondary_disk later and replace access to them with local variables
in the places where they aren't initialized yet.
Signed-off-by: Lukas Straub
---
block/replication.c | 45 -
1 file
s->active_disk is bs->file. Remove it and use local variables instead.
Signed-off-by: Lukas Straub
---
block/replication.c | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/block/replication.c b/block/replication.c
index 52163f2d1f..5094
Hello Everyone,
A while ago Kevin noticed that the replication driver doesn't properly attach
the children it wants to use. Instead, it directly copies the BdrvChilds from
its backing file, which is wrong. This patchset fixes the problem and removes
the workaround that was put in place back then.
> Am 06.07.2021 um 17:25 schrieb Kevin Wolf :
>
> Am 06.07.2021 um 16:55 hat Peter Lieven geschrieben:
>>> Am 06.07.2021 um 15:19 schrieb Kevin Wolf :
>>>
>>> Am 02.07.2021 um 19:23 hat Ilya Dryomov geschrieben:
This series migrates the qemu rbd driver from the old aio emulation
to
adding myself as a designated reviewer.
Signed-off-by: Peter Lieven
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 516db737d1..cfda57e825 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3058,6 +3058,7 @@ F: block/vmdk.c
RBD
M: Ilya Dryomov
task->complete is a bool, not an integer.
Signed-off-by: Peter Lieven
---
block/rbd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/rbd.c b/block/rbd.c
index 01a7b94d62..dcf82b15b8 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -1066,7 +1066,7 @@ static int qemu_rbd_res
Now that we use the job_mutex, remove unnecessary aio_context_acquire/release
pairs. However, some places still need it, so try to reduce the
aio_context critical section to the minimum.
This patch is separated from the one before because here we are removing
locks without substituting them with aio
Using getters/setters we can have stricter control over struct Job
fields. The struct remains public, because it is also used as the base
class for BlockJobs and others, but replace all direct accesses
to the fields we want to protect with getters/setters.
This is in preparation for the locking patc
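The accessor pattern described above can be sketched as follows. This is an illustrative assumption only: `JobSketch`, the field names, and the `job_mutex` usage are invented stand-ins, not QEMU's actual Job API.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Invented sketch of mutex-protected accessors for a Job-like struct;
 * QEMU's real job.h differs in names and structure. */
static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;

typedef struct {
    bool paused;
    int ret;
} JobSketch;

/* All reads/writes of protected fields go through the lock, so callers
 * never touch job->ret or job->paused directly. */
static void job_set_ret(JobSketch *job, int ret)
{
    pthread_mutex_lock(&job_mutex);
    job->ret = ret;
    pthread_mutex_unlock(&job_mutex);
}

static int job_get_ret(JobSketch *job)
{
    pthread_mutex_lock(&job_mutex);
    int ret = job->ret;
    pthread_mutex_unlock(&job_mutex);
    return ret;
}

static void job_set_paused(JobSketch *job, bool paused)
{
    pthread_mutex_lock(&job_mutex);
    job->paused = paused;
    pthread_mutex_unlock(&job_mutex);
}

static bool job_get_paused(JobSketch *job)
{
    pthread_mutex_lock(&job_mutex);
    bool paused = job->paused;
    pthread_mutex_unlock(&job_mutex);
    return paused;
}
```

The point of the pattern is that once every access funnels through such helpers, the later patches can change the locking underneath without touching callers.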
Create _locked functions to make the next patch a little smaller.
Also make the locking functions public, so that they can also be used
by code that uses the Job struct.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/job.h | 23 +
job.c | 85 ++
This lock is going to replace most of the AioContext locks
in the job and blockjob, so that a Job can run in an arbitrary
AioContext.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/block/blockjob_int.h | 1 +
include/qemu/job.h | 2 +
block/backup.c | 4 +
bl
Check for a NULL id in job_get, so that in the next patch we can
move job_get inside a single critical section of job_create.
Also add missing notifier_list_init for the on_idle NotifierList,
which seems to have been forgotten.
Signed-off-by: Emanuele Giuseppe Esposito
---
job.c | 16 ---
This makes it easier to understand what needs to be protected
by a lock and what doesn't.
Signed-off-by: Emanuele Giuseppe Esposito
---
include/qemu/job.h | 101 -
1 file changed, 82 insertions(+), 19 deletions(-)
diff --git a/include/qemu/job.h b/inc
This is a continuation on the work to reduce (and possibly get rid of) the
usage of AioContext lock, by introducing smaller granularity locks to keep the
thread safety.
This series aims to:
1) remove the aiocontext lock and substitute it with the already existing
global job_mutex
2) fix what
On Jul 7 17:57, Hannes Reinecke wrote:
On 7/7/21 5:49 PM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subsy
On Jul 7 17:14, Stefan Hajnoczi wrote:
On Wed, Jul 07, 2021 at 12:43:56PM +0200, Hannes Reinecke wrote:
On 7/7/21 11:53 AM, Klaus Jensen wrote:
> On Jul 7 09:49, Hannes Reinecke wrote:
> > On 7/6/21 11:33 AM, Klaus Jensen wrote:
> > > From: Klaus Jensen
> > >
> > > Prior to this patch the nvm
07.07.2021 17:53, Lukas Straub wrote:
Hi,
Thanks for your review. More below.
Btw: There is an overview of the replication design in
docs/block-replication.txt
On Wed, 7 Jul 2021 16:01:31 +0300
Vladimir Sementsov-Ogievskiy wrote:
06.07.2021 19:11, Lukas Straub wrote:
The replication driver n
On Wed, Jul 07, 2021 at 12:43:56PM +0200, Hannes Reinecke wrote:
> On 7/7/21 11:53 AM, Klaus Jensen wrote:
> > On Jul 7 09:49, Hannes Reinecke wrote:
> > > On 7/6/21 11:33 AM, Klaus Jensen wrote:
> > > > From: Klaus Jensen
> > > >
> > > > Prior to this patch the nvme-ns devices are always childr
On 7/7/21 5:49 PM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subsystems
are involved, this is not what we w
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subsystems
are involved, this is not what we want since the namespaces may be
attached
From: Klaus Jensen
Make sure the controller is unregistered from the subsystem when device
is removed.
Reviewed-by: Hannes Reinecke
Signed-off-by: Klaus Jensen
---
hw/nvme/nvme.h | 1 +
hw/nvme/ctrl.c | 4
hw/nvme/subsys.c | 5 +
3 files changed, 10 insertions(+)
diff --git a/hw
From: Klaus Jensen
We currently lack the infrastructure to handle subsystem hotplugging, so
disable it.
Reviewed-by: Hannes Reinecke
Signed-off-by: Klaus Jensen
---
hw/nvme/subsys.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index 192223d17ca1..dc7
From: Klaus Jensen
The nvme_ns_setup and nvme_ns_check_constraints should not depend on the
controller state. Refactor and remove it.
Reviewed-by: Hannes Reinecke
Signed-off-by: Klaus Jensen
---
hw/nvme/nvme.h | 2 +-
hw/nvme/ctrl.c | 2 +-
hw/nvme/ns.c | 37 ++
From: Klaus Jensen
Back in May, Hannes posted a fix[1] to re-enable NVMe PCI hotplug. We
discussed a bit back and forth and I mentioned that the core issue was
an artifact of the parent/child relationship stemming from the qdev
setup we have with namespaces attaching to controller through a qdev
On Jul 7 12:43, Hannes Reinecke wrote:
On 7/7/21 11:53 AM, Klaus Jensen wrote:
On Jul 7 09:49, Hannes Reinecke wrote:
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the
From: Greg Kurz
The device model batching its ioeventfds in a single MR transaction is
an optimization. Clarify this in virtio-scsi, virtio-blk and generic
virtio code. Also clarify that the transaction must commit before
closing ioeventfds so that no one is tempted to merge the loops
in the star
On Sat, Jul 03, 2021 at 10:25:28AM +0300, Vladimir Sementsov-Ogievskiy wrote:
...
> > An obvious solution is to make 'qemu-img map --output=json' add an
> > additional "present":false designation to any cluster lacking an
> > allocation anywhere in the chain, without any change to the "depth"
> > p
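Purely as a hypothetical illustration of that proposal (all offsets, lengths, and values invented), a map entry for a cluster absent from the whole chain might render as:

```json
[
  { "start": 0, "length": 65536, "depth": 0, "present": true,
    "zero": false, "data": true, "offset": 327680 },
  { "start": 65536, "length": 65536, "depth": 0, "present": false,
    "zero": true, "data": false }
]
```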
When there are multiple queues attached to the same AIO context,
some requests may experience high latency, since in the worst case
the AIO engine queue is only flushed when it is full (MAX_EVENTS) or
there are no more queues plugged.
Commit 2558cb8dd4 ("linux-aio: increasing MAX_EVENTS to a large
Changes in preparation for the next patches, where we add a new
parameter not related to the poll mechanism.
Let's add two new generic functions (iothread_set_param and
iothread_get_param) that we use to set and get IOThread
parameters.
Signed-off-by: Stefano Garzarella
---
iothread.c | 27 +
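A minimal sketch of such generic helpers, with invented names and fields (the real iothread.c accessors are QOM property handlers, which this deliberately simplifies):

```c
#include <assert.h>
#include <stdint.h>

/* Invented illustration: one setter/getter pair dispatching on a
 * parameter id, so adding a new parameter only adds an enum entry. */
typedef enum {
    IOTHREAD_PARAM_POLL_MAX_NS,
    IOTHREAD_PARAM_AIO_MAX_BATCH,
} IOThreadParam;

typedef struct {
    int64_t poll_max_ns;
    int64_t aio_max_batch;
} IOThreadSketch;

static void iothread_set_param(IOThreadSketch *t, IOThreadParam p, int64_t v)
{
    switch (p) {
    case IOTHREAD_PARAM_POLL_MAX_NS:
        t->poll_max_ns = v;
        break;
    case IOTHREAD_PARAM_AIO_MAX_BATCH:
        t->aio_max_batch = v;
        break;
    }
}

static int64_t iothread_get_param(IOThreadSketch *t, IOThreadParam p)
{
    switch (p) {
    case IOTHREAD_PARAM_POLL_MAX_NS:
        return t->poll_max_ns;
    case IOTHREAD_PARAM_AIO_MAX_BATCH:
        return t->aio_max_batch;
    }
    return -1; /* unreachable for valid parameters */
}
```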
The `aio-max-batch` parameter will be propagated to AIO engines
and it will be used to control the maximum number of queued requests.
When the number of queued requests reaches `aio-max-batch`,
the engine invokes the system call to forward the requests to the kernel.
This parameter all
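The flush rule described here can be sketched as below; `BatchQueue` and all names are invented for illustration, and QEMU's actual linux-aio code is considerably more involved.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented model of the batching threshold: requests accumulate in a
 * queue and are submitted together once max_batch is reached. */
typedef struct {
    int queued;    /* requests accumulated but not yet submitted */
    int max_batch; /* flush threshold, i.e. the aio-max-batch value */
    int submitted; /* total requests handed to the kernel so far */
} BatchQueue;

/* Stand-in for calling io_submit(2) with all queued requests. */
static int batch_flush(BatchQueue *q)
{
    int n = q->queued;
    q->submitted += n;
    q->queued = 0;
    return n;
}

/* Queue one request; flush automatically once max_batch is reached.
 * Returns true if this request triggered a flush. */
static bool batch_enqueue(BatchQueue *q)
{
    q->queued++;
    if (q->queued >= q->max_batch) {
        batch_flush(q);
        return true;
    }
    return false;
}
```

A smaller `max_batch` trades peak throughput for lower worst-case latency, since no request waits for more than `max_batch - 1` others before submission.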
This series adds a new `aio-max-batch` parameter to IOThread and uses it in the
Linux AIO backend to limit the batch size (number of requests submitted to the
kernel through io_submit(2)).
Commit 2558cb8dd4 ("linux-aio: increasing MAX_EVENTS to a larger hardcoded
value") changed MAX_EVENTS from 128
Hi,
Thanks for your review. More below.
Btw: There is an overview of the replication design in
docs/block-replication.txt
On Wed, 7 Jul 2021 16:01:31 +0300
Vladimir Sementsov-Ogievskiy wrote:
> 06.07.2021 19:11, Lukas Straub wrote:
> > The replication driver needs access to the children block-no
On Mon, Jul 05 2021, Kevin Wolf wrote:
> dev->max_queues was never initialised for backends that don't support
> VHOST_USER_PROTOCOL_F_MQ, so it would use 0 as the maximum number of
> queues to check against and consequently fail for any such backend.
>
> Set it to 1 if the backend doesn't have m
Forgotten thing :(
Kevin, could you please queue it in your block branch? For me not to bother
Peter with a one-patch pull request.
08.06.2021 20:18, Vladimir Sementsov-Ogievskiy wrote:
drive_backup_prepare() does bdrv_drained_begin() in hope that
bdrv_drained_end() will be called in drive_backu
06.07.2021 19:11, Lukas Straub wrote:
The replication driver needs access to the children block-nodes of
its child so it can issue bdrv_make_empty to manage the replication.
However, it does this by directly copying the BdrvChilds, which is
wrong.
Fix this by properly attaching the block-nodes
Am 07.07.2021 um 10:50 hat Or Ozeri geschrieben:
> Would you suggest to do this child traversal on bdrv_query_image_info, and
> have
> it returned as part of the ImageInfo struct?
> In that case, I would add *driver-specific to ImageInfo, in addition to the
> existing *format-specific?
No, extend
Am 25.06.2021 um 16:23 hat Max Reitz geschrieben:
> Max Reitz (6):
> export/fuse: Pass default_permissions for mount
> export/fuse: Add allow-other option
> export/fuse: Give SET_ATTR_SIZE its own branch
> export/fuse: Let permissions be adjustable
> iotests/308: Test +w on read-only FUSE
On 7/7/21 11:53 AM, Klaus Jensen wrote:
On Jul 7 09:49, Hannes Reinecke wrote:
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the par
Am 25.06.2021 um 16:23 hat Max Reitz geschrieben:
> Signed-off-by: Max Reitz
> ---
> tests/qemu-iotests/tests/fuse-allow-other | 175 ++
> tests/qemu-iotests/tests/fuse-allow-other.out | 88 +
> 2 files changed, 263 insertions(+)
> create mode 100755 tests/qemu-iotes
Am 25.06.2021 um 16:23 hat Max Reitz geschrieben:
> Without the allow_other mount option, no user (not even root) but the
> one who started qemu/the storage daemon can access the export. Allow
> users to configure the export such that such accesses are possible.
>
> While allow_other is probably
On Jul 6 11:33, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subsystems
are involved, this is not what we wan
On Jul 7 09:49, Hannes Reinecke wrote:
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subs
On Tue, Jul 06, 2021 at 10:37:03AM +0200, Philippe Mathieu-Daudé wrote:
> Stefan, IIRC the multi-process conclusion was we have to reject
> PCI devices bridging another (non-PCI) bus, such as ISA / I2C / USB
> / SD / ... because QEMU registers the bus type globally and the
> command line machinery resol
Would you suggest to do this child traversal on bdrv_query_image_info, and have
it returned as part of the ImageInfo struct? In that case, I would add
*driver-specific to ImageInfo, in addition to the existing *format-specific?
Or should I just do the traversal in img_info (qemu-img.c), avoiding the
Am 07.07.2021 um 07:35 hat Or Ozeri geschrieben:
> When using the raw format, allow exposing specific info by the underlying
> storage.
> In particular, this will enable RBD images using the raw format to indicate
> a LUKS2 encrypted image in the output of qemu-img info.
>
> Signed-off-by: Or Oze
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Prior to this patch the nvme-ns devices are always children of the
NvmeBus owned by the NvmeCtrl. This causes the namespaces to be
unrealized when the parent device is removed. However, when subsystems
are involved, this is not what we
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
Make sure the controller is unregistered from the subsystem when device
is removed.
Signed-off-by: Klaus Jensen
---
hw/nvme/nvme.h | 1 +
hw/nvme/ctrl.c | 4
hw/nvme/subsys.c | 5 +
3 files changed, 10 insertions(+)
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
We currently lack the infrastructure to handle subsystem hotplugging, so
disable it.
Signed-off-by: Klaus Jensen
---
hw/nvme/subsys.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index 192
On 7/6/21 11:33 AM, Klaus Jensen wrote:
From: Klaus Jensen
The nvme_ns_setup and nvme_ns_check_constraints should not depend on the
controller state. Refactor and remove it.
Signed-off-by: Klaus Jensen
---
hw/nvme/nvme.h | 2 +-
hw/nvme/ctrl.c | 2 +-
hw/nvme/ns.c | 37 +++