On Thu 25 Feb 2016 11:53:54 PM CET, Paolo Bonzini wrote:
> Not particularly important since qemu-img exits immediately after
> calling img_rebase, but easily fixed. Coverity says thanks.
>
> Signed-off-by: Paolo Bonzini
Reviewed-by: Alberto Garcia
Berto
On Thu, 02/25 15:58, John Snow wrote:
> During incremental backups, if the target has a cluster size that is
> larger than the backup cluster size and we are backing up to a target
> that cannot (for whichever reason) pull clusters up from a backing image,
we may inadvertently create unusable inc
Simple unions were carrying a special case that hid their 'data'
QMP member from the resulting C struct, via the hack method
QAPISchemaObjectTypeVariant.simple_union_type(). But using the
work we started by unboxing flat union and alternate branches, we
expose the simple union's implicit type in q
qapi code generators currently create a 'void *data' member as
part of the anonymous union embedded in the C struct corresponding
to a qapi union. However, directly assigning to this member of
the union feels a bit fishy when we can use the rest of the
struct directly instead.
Signed-off-by: Eri
An upcoming patch will alter how simple unions, like SocketAddress,
are laid out, which will impact all lines of the form 'addr->u.XXX'.
To minimize the impact of that patch, use C99 initialization or a
temporary variable to reduce the number of lines needing modification
when an internal reference
Not particularly important since qemu-img exits immediately after
calling img_rebase, but easily fixed. Coverity says thanks.
Signed-off-by: Paolo Bonzini
---
qemu-img.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/qemu-img.c b/qemu-img.c
index 2edb139..3103150 1
Backups sometimes need a non-64KiB transfer cluster size.
See patch #2 for the detailed justification.
===
v4:
===
02: Polished the error message.
===
v3:
===
01: +R-B
02: Added failure mode for bdrv_get_info, including critical failure for when
we don't have a backing file but couldn't retr
If a backing file isn't specified in the target image and the
cluster_size is larger than the bitmap granularity, we run the risk of
creating bitmaps with allocated clusters but empty/no data which will
prevent the proper reading of the backup in the future.
Signed-off-by: John Snow
Reviewed-by:
64K might not always be appropriate, make this a runtime value.
Signed-off-by: John Snow
Reviewed-by: Fam Zheng
---
block/backup.c | 64 +-
1 file changed, 36 insertions(+), 28 deletions(-)
diff --git a/block/backup.c b/block/backup.c
ind
During incremental backups, if the target has a cluster size that is
larger than the backup cluster size and we are backing up to a target
that cannot (for whichever reason) pull clusters up from a backing image,
we may inadvertently create unusable incremental backup images.
For example:
If the
On 02/25/2016 02:49 AM, Stefan Priebe - Profihost AG wrote:
>
> Am 22.02.2016 um 23:08 schrieb John Snow:
>>
>>
>> On 02/22/2016 03:21 PM, Stefan Priebe wrote:
>>> Hello,
>>>
>>> is there any chance or hack to work with a bigger cluster size for the
>>> drive backup job?
>>>
>>> See:
>>> http://
When QEMU creates a VHD image, it goes by the original spec,
calculating the current_size based on the nearest CHS geometry (with an
exception for disks > 127GB).
Apparently, Azure will only allow images that are sized to the nearest
MB, and the current_size as calculated from CHS cannot guarantee
Changes from v3:
Patch 2: Added a sample image & tests for Disk2vhd
Patch 3: When using force_size, set the CHS geometry to max
(as Disk2vhd does), to maximize backward compatibility (thanks Peter)
Patch 4: Updated test results due to the above changes in patch 3
Changes from v2:
Patche
This tests auto-detection, and overrides, of VHD image sizes created
by Virtual PC, Hyper-V, and Disk2vhd.
This adds three sample images:
hyperv2012r2-dynamic.vhd.bz2 - dynamic VHD image created with Hyper-V
virtualpc-dynamic.vhd.bz2 - dynamic VHD image created with Virtual PC
d2v-zerofilled.v
The VHD file format is used by both Virtual PC, and Hyper-V. However,
how the virtual disk size is calculated varies between the two.
Virtual PC uses the CHS drive parameters to determine the drive size.
Hyper-V, on the other hand, uses the current_size field in the footer
when determining image
Signed-off-by: Jeff Cody
---
tests/qemu-iotests/146 | 51 ++
tests/qemu-iotests/146.out | 32 +
2 files changed, 83 insertions(+)
diff --git a/tests/qemu-iotests/146 b/tests/qemu-iotests/146
index 4cbe1f4..043711b 100755
On 02/25/2016 06:18 AM, Alberto Garcia wrote:
> On Tue 23 Feb 2016 06:16:38 PM CET, Kevin Wolf wrote:
>
>> However, libvirt can't use blockdev-add as long as it is still
>> experimental, and we're expecting that it will still take some time,
>> so we need to resort to drive_add.
>
> But how stabl
On Tue 23 Feb 2016 06:16:38 PM CET, Kevin Wolf wrote:
> However, libvirt can't use blockdev-add as long as it is still
> experimental, and we're expecting that it will still take some time,
> so we need to resort to drive_add.
But how stable is the HMP API supposed to be?
Berto
On Wed, Feb 24, 2016 at 18:54:45 +0100, Max Reitz wrote:
> On 23.02.2016 18:16, Kevin Wolf wrote:
> > Now that we can use drive_add to create new nodes without a BB, we also
> > want to be able to delete such nodes again.
> >
> > Signed-off-by: Kevin Wolf
> > ---
> > blockdev.c | 9 +
> >
From: Paolo Bonzini
Signed-off-by: Paolo Bonzini
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
Reviewed-by: Fam Zheng
Acked-by: Stefan Hajnoczi
---
hw/block/dataplane/virtio-blk.h | 1 +
include/hw/virtio/virtio-blk.h | 3 --
hw/block/dataplane/virtio-blk.c | 112 ++
From: Paolo Bonzini
In disabled mode, virtio-blk dataplane seems to be enabled, but I/O
actually flows through the normal virtio path. This patch slightly
simplifies the handling of disabled mode. In disabled mode,
virtio_blk_handle_output might be called even if s->dataplane is not NULL.
This is
From: Paolo Bonzini
Make the API more similar to the regular virtqueue API. This will
help when modifying the code to not use vring.c anymore.
Signed-off-by: Paolo Bonzini
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Michael S. Tsirkin
Acked-by: Cornelia Huck
Reviewed-by: Fam Zheng
Acked
From: Paolo Bonzini
This is needed because dataplane will run during block migration as well.
The block device migration code is quite liberal in taking the iothread
mutex. For simplicity, keep it the same way, even though one could
actually choose between the BQL (for regular BlockDriverStates
On Wed 24 Feb 2016 05:15:11 PM CET, Max Reitz wrote:
>> When x-blockdev-del is performed on a BlockBackend that has inserted
>> media it will only succeed if the BDS doesn't have any additional
>> references.
>>
>> The only problem with this is that if the BDS was created separately
>> using bloc
On Thu, 02/25 08:49, Stefan Priebe - Profihost AG wrote:
>
> Am 22.02.2016 um 23:08 schrieb John Snow:
> >
> >
> > On 02/22/2016 03:21 PM, Stefan Priebe wrote:
> >> Hello,
> >>
> >> is there any chance or hack to work with a bigger cluster size for the
> >> drive backup job?
> >>
> >> See:
> >>