On Tue, 06/14 15:30, Eric Blake wrote:
> It makes more sense to have ALL block size limit constraints
> in the same struct. Improve the documentation while at it.
>
> Signed-off-by: Eric Blake
Reviewed-by: Fam Zheng
On Tue, 06/14 15:30, Eric Blake wrote:
> The raw block driver was blindly copying all limits from bs->file,
> even though: 1. the main bdrv_refresh_limits() already does this
> for many of gthe limits, and 2. blindly copying from the children
s/gthe/the ?
> can weaken any stricter limits that
On Tue, 06/14 15:30, Eric Blake wrote:
> Sector-based limits are awkward to think about; in our on-going
> quest to move to byte-based interfaces, convert max_discard and
> discard_alignment. Rename them, using 'pdiscard' as an aid to
> track which remaining discard interfaces need conversion,
On Tue, 06/14 15:30, Eric Blake wrote:
> Sector-based limits are awkward to think about; in our on-going
> quest to move to byte-based interfaces, convert max_transfer_length
> and opt_transfer_length. Rename them (dropping the _length suffix)
> so that the compiler will help us catch the change
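The mechanical part of the conversion above can be sketched as follows. Only the shift is real QEMU practice; the function name is invented for illustration, and the renamed fields (max_transfer, opt_transfer) come from the commit message:

```c
#include <stdint.h>

#define BDRV_SECTOR_BITS 9   /* QEMU's block layer uses 512-byte sectors */

/* A sector-based limit such as max_transfer_length becomes a byte-based
 * max_transfer by shifting; dropping the _length suffix makes leftover
 * sector-based users a compile error rather than a silent unit bug. */
static uint32_t sectors_to_bytes(uint32_t sectors)
{
    return sectors << BDRV_SECTOR_BITS;
}
```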
On Tue, 06/14 15:30, Eric Blake wrote:
> We want to eventually stick request_alignment alongside other
> BlockLimits, but first, we must ensure it is populated at the
> same time as all other limits, rather than being a special case
> that is set only when a block is first opened.
>
> Add a
On Tue, 06/14 15:30, Eric Blake wrote:
> We want to eventually stick request_alignment alongside other
> BlockLimits, but first, we must ensure it is populated at the
> same time as all other limits, rather than being a special case
> that is set only when a block is first opened.
>
> Now that
On Tue, 06/14 15:30, Eric Blake wrote:
> We want to eventually stick request_alignment alongside other
> BlockLimits, but first, we must ensure it is populated at the
> same time as all other limits, rather than being a special case
> that is set only when a block is first opened.
>
>
On Tue, 06/14 15:30, Eric Blake wrote:
> We want to eventually stick request_alignment alongside other
> BlockLimits, but first, we must ensure it is populated at the
> same time as all other limits, rather than being a special case
> that is set only when a block is first opened.
>
> In this
On Tue, 06/14 15:30, Eric Blake wrote:
> We want to eventually stick request_alignment alongside other
> BlockLimits, but first, we must ensure it is populated at the
> same time as all other limits, rather than being a special case
> that is set only when a block is first opened.
>
>
On Tue, 06/14 15:30, Eric Blake wrote:
> Making all callers special-case 0 as unlimited is awkward,
> and we DO have a hard maximum of BDRV_REQUEST_MAX_SECTORS given
> our current block layer API limits.
>
> In the case of scsi, this means that we now always advertise a
> limit to the guest, even
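The normalization described above can be sketched like this. The value given for BDRV_REQUEST_MAX_SECTORS is an assumption for the sketch; the real definition lives in QEMU's block headers:

```c
#include <limits.h>

/* Assumed stand-in for QEMU's hard ceiling on a single request. */
#define BDRV_REQUEST_MAX_SECTORS (INT_MAX >> 9)

/* With a hard maximum available, 0 ("unlimited") can be normalized away
 * at refresh time, so no caller has to special-case it. */
static int clamp_max_sectors(int reported)
{
    if (reported <= 0 || reported > BDRV_REQUEST_MAX_SECTORS) {
        return BDRV_REQUEST_MAX_SECTORS;
    }
    return reported;
}
```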
On Tue, 06/14 15:30, Eric Blake wrote:
> The function sector_limits_lun2qemu() returns a value in units of
> the block layer's 512-byte sector, and can be as large as
> 0x4000, which is much larger than the block layer's inherent
> limit of BDRV_REQUEST_MAX_SECTORS. The block layer already
>
On Tue, 06/14 15:30, Eric Blake wrote:
> We were basing the advertisement of maximum discard and transfer
> length off of UINT32_MAX, but since the rest of the block layer
> has signed int limits on a transaction, nothing could ever reach
> that maximum, and we risk overflowing an int once things
On Tue, 06/14 15:30, Eric Blake wrote:
> The NBD layer was breaking up requests at a limit of 2040 sectors
> (just under 1M) to cater to old qemu-nbd. But the server limit
> was raised to 32M in commit 2d8214885 to match the kernel, more
> than three years ago; and the upstream NBD Protocol is
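The effect of raising the limit is easy to see in a fragmentation loop like the one clients run (a simplified sketch, not the actual NBD client code):

```c
#include <stdint.h>

/* A client splits one large request into pieces no bigger than the
 * advertised limit.  Raising that limit from 2040 sectors (just under
 * 1 MiB) to 32 MiB cuts a 32 MiB transfer from 33 round trips to 1. */
static int count_fragments(uint64_t bytes, uint64_t max_transfer)
{
    int n = 0;
    while (bytes > 0) {
        uint64_t num = bytes < max_transfer ? bytes : max_transfer;
        bytes -= num;
        n++;
    }
    return n;
}
```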
On Tue, 06/14 15:30, Eric Blake wrote:
> If the amount of data to read ends exactly on the total size
> of the bs, then we were wasting time creating a local qiov
> to read the data in preparation for what would normally be
> appending zeroes beyond the end, even though this corner case
> has
On Tue, 06/14 15:30, Eric Blake wrote:
> We don't pass any flags on to drivers to handle. Tighten an
> assert to explain why we pass 0 to bdrv_driver_preadv(), and add
> some comments on things to be aware of if we want to turn on
> per-BDS BDRV_REQ_FUA support during reads in the future. Also,
On Tue, 06/14 15:30, Eric Blake wrote:
> For symmetry with bdrv_aligned_preadv(), assert that the caller
> really has aligned things properly. This requires adding an align
> parameter, which is used now only in the new asserts, but will
> come in handy in a later patch that adds
On Wed, 06/15 14:40, Colin Lord wrote:
> From: Marc Mari
>
> To simplify the addition of new block modules, add a script that generates
> include/qemu/module_block.h automatically from the modules' source code.
>
> This script assumes that the QEMU coding style rules are
On 15/06/2016 20:40, Colin Lord wrote:
>
> The only block drivers that can be converted into modules are the drivers
> that don't perform any init operation except for registering themselves. This
> is why libiscsi has been disabled as a module.
I don't think it has in this patch :) but you
On 15/06/2016 20:40, Colin Lord wrote:
> +def add_module(fhader, library, format_name, protocol_name,
fhader looks like a typo.
Paolo
> +               probe, probe_device):
> +    lines = []
> +    lines.append('.library_name = "' + library + '",')
> +    if format_name != "":
> +
On 15/06/2016 23:16, Cédric Le Goater wrote:
> This enables qemu to handle late inits and report errors. All the SSI
> slave routine names were changed accordingly. Code was modified to
> handle errors when possible (m25p80 and ssi-sd)
>
> Tested with the m25p80 slave object.
>
> Suggested-by:
This enables qemu to handle late inits and report errors. All the SSI
slave routine names were changed accordingly. Code was modified to
handle errors when possible (m25p80 and ssi-sd)
Tested with the m25p80 slave object.
Suggested-by: Paolo Bonzini
Signed-off-by: Cédric Le
From: Marc Mari
Extend the current module interface to allow for block drivers to be loaded
dynamically on request.
The only block drivers that can be converted into modules are the drivers
that don't perform any init operation except for registering themselves. This
is why
On 15/06/2016 17:44, Cédric Le Goater wrote:
>     s->sd = sd_init(dinfo ? blk_by_legacy_dinfo(dinfo) : NULL, true);
>     if (s->sd == NULL) {
> -        return -1;
This needs an error_setg (see device_realize in hw/core/qdev.c for an
example) until sd_init is changed to take Error *.
This is a repost of some previous patches written by Marc Marí which
were also reposted by Richard Jones a few months ago. The original
series and reposted series are here:
https://lists.gnu.org/archive/html/qemu-devel/2015-09/msg01995.html
From: Marc Mari
To simplify the addition of new block modules, add a script that generates
include/qemu/module_block.h automatically from the modules' source code.
This script assumes that the QEMU coding style rules are followed.
Signed-off-by: Marc Marí
Hello Eric,
On 06/13/2016 06:47 PM, Eric Blake wrote:
> On 06/13/2016 10:25 AM, Cédric Le Goater wrote:
>
>>
>> It seems that commit 243e6f69c129 ("m25p80: Switch to byte-based block
>> access")
>> is bringing another issue :
>>
>> qemu-system-arm:
>>
This patch is the result of coccinelle script
scripts/coccinelle/typecast.cocci
CC: Hitoshi Mitake
CC: qemu-block@nongnu.org
Signed-off-by: Laurent Vivier
---
block/sheepdog.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
On 06/15/2016 09:38 AM, Eric Blake wrote:
> On 06/15/2016 09:17 AM, Max Reitz wrote:
>> On 15.06.2016 11:58, Kashyap Chamarthy wrote:
>>> Seems like supplying "qcow2" file BlockdevDriver option to QMP
>>> `blockdev-add` results in a SIGSEGV:
>>>
>>> [...]
>>> Thread 1 "qemu-system-x86"
This enables qemu to handle late inits and report errors. All the SSI
slave routine names were changed accordingly. Code was modified to
handle errors when possible (m25p80)
Tested with the m25p80 slave object.
Suggested-by: Paolo Bonzini
Signed-off-by: Cédric Le Goater
On 06/15/2016 09:36 AM, Max Reitz wrote:
> Emitting the plain error number is not very helpful. Use strerror()
> instead.
>
> Signed-off-by: Max Reitz
> ---
> qemu-img.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/qemu-img.c b/qemu-img.c
> index
On 06/15/2016 09:36 AM, Max Reitz wrote:
> We refuse to open images whose L1 table we deem "too big". Consequently,
> we should not produce such images ourselves.
>
> Cc: qemu-sta...@nongnu.org
> Signed-off-by: Max Reitz
> ---
> block/qcow2-cluster.c | 2 +-
> 1 file changed,
On 06/15/2016 09:17 AM, Max Reitz wrote:
> On 15.06.2016 11:58, Kashyap Chamarthy wrote:
>> Seems like supplying "qcow2" file BlockdevDriver option to QMP
>> `blockdev-add` results in a SIGSEGV:
>>
>> [...]
>> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
>>
We refuse to open images whose L1 table we deem "too big". Consequently,
we should not produce such images ourselves.
Cc: qemu-sta...@nongnu.org
Signed-off-by: Max Reitz
---
block/qcow2-cluster.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
See https://bugs.launchpad.net/qemu/+bug/1592590 for a bug report.
Reproducer:
$ ./qemu-img create -f qcow2 test.qcow2 1M
Formatting 'test.qcow2', fmt=qcow2 size=1048576 encryption=off
cluster_size=65536 lazy_refcounts=off refcount_bits=16
$ ./qemu-img resize test.qcow2 10T
Image resized.
$
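The fix the series proposes amounts to enforcing the open-time bound at grow time as well (a sketch; the constant's exact value is an assumption, the real one is in the qcow2 headers):

```c
#include <stdint.h>

/* Assumed stand-in for the bound qcow2 already applies when opening. */
#define QCOW_MAX_L1_SIZE (32 * 1024 * 1024)   /* bytes */

/* Checking the same bound when growing the L1 table means resize can no
 * longer produce an image that a subsequent open would refuse. */
static int l1_size_ok(uint64_t l1_entries)
{
    return l1_entries * sizeof(uint64_t) <= QCOW_MAX_L1_SIZE;
}
```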
Emitting the plain error number is not very helpful. Use strerror()
instead.
Signed-off-by: Max Reitz
---
qemu-img.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/qemu-img.c b/qemu-img.c
index 14e2661..d5ccd9a 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@
On 15.06.2016 11:58, Kashyap Chamarthy wrote:
> Seems like supplying "qcow2" file BlockdevDriver option to QMP
> `blockdev-add` results in a SIGSEGV:
>
> [...]
> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
> 0x55a0121f in visit_type_BlockdevRef ()
On 15.06.2016 01:14, Eric Blake wrote:
> On 04/06/2016 12:28 PM, Max Reitz wrote:
>> Add a new option "address" to the NBD block driver which accepts a
>> SocketAddress.
>>
>> "path", "host" and "port" are still supported as legacy options and are
>> mapped to their corresponding SocketAddress
On 06/15/2016 04:20 PM, Paolo Bonzini wrote:
>
>
> On 15/06/2016 16:00, Cédric Le Goater wrote:
>> We also need to realize() the SSISlave part of the object. This is why
>> the previous realize() ops is stored in M25P80Class and called in the
>> object realize() ops.
>>
>> This is fully
On 15/06/2016 16:00, Cédric Le Goater wrote:
> We also need to realize() the SSISlave part of the object. This is why
> the previous realize() ops is stored in M25P80Class and called in the
> object realize() ops.
>
> This is fully compatible with the existing users of m25p80 and it
> provides
We also need to realize() the SSISlave part of the object. This is why
the previous realize() ops is stored in M25P80Class and called in the
object realize() ops.
This is fully compatible with the existing users of m25p80 and it
provides a way to handle errors on the drive backend.
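The "save the parent's realize and chain to it" pattern described above can be sketched with the QOM machinery reduced to plain structs and function pointers (all names here are simplified stand-ins):

```c
typedef struct Obj { int ssi_realized; int m25p80_realized; } Obj;
typedef void (*RealizeFn)(Obj *obj);

typedef struct {
    RealizeFn parent_realize;   /* SSISlave's realize, saved at class_init */
} M25P80Class;

static void ssi_slave_realize(Obj *obj) { obj->ssi_realized = 1; }

static void m25p80_realize(M25P80Class *mc, Obj *obj)
{
    mc->parent_realize(obj);    /* realize the SSISlave part first */
    obj->m25p80_realized = 1;   /* then the m25p80-specific setup, where
                                 * backend errors can now be reported */
}

static void m25p80_class_init(M25P80Class *mc)
{
    /* save the method being overridden before installing our own */
    mc->parent_realize = ssi_slave_realize;
}
```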
On 06/15/2016 04:07 PM, Peter Maydell wrote:
> On 15 June 2016 at 15:00, Cédric Le Goater wrote:
>> We also need to realize() the SSISlave part of the object. This is why
>> the previous realize() ops is stored in M25P80Class and called in the
>> object realize() ops.
>>
>> This is
On 15 June 2016 at 15:00, Cédric Le Goater wrote:
> We also need to realize() the SSISlave part of the object. This is why
> the previous realize() ops is stored in M25P80Class and called in the
> object realize() ops.
>
> This is fully compatible with the existing users of m25p80
On 06/15/2016 03:34 PM, Eric Blake wrote:
> On 06/15/2016 02:46 AM, Denis V. Lunev wrote:
>> On 06/15/2016 06:00 AM, Eric Blake wrote:
>>> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>>>> With a bdrv_co_write_zeroes method on a target BDS zeroes will not be
>>>> placed
>>>> into the wire. Thus the target could be
We also need to realize() the SSISlave part of the object. This is why
the previous realize() ops is stored in M25P80Class and called in the
object realize() ops.
This is fully compatible with the existing users of m25p80 and it
provides a way to handle errors on the drive backend.
On 14/06/2016 23:30, Eric Blake wrote:
> The NBD layer was breaking up requests at a limit of 2040 sectors
> (just under 1M) to cater to old qemu-nbd. But the server limit
> was raised to 32M in commit 2d8214885 to match the kernel, more
> than three years ago; and the upstream NBD Protocol is
On 14/06/2016 23:30, Eric Blake wrote:
> We were basing the advertisement of maximum discard and transfer
> length off of UINT32_MAX, but since the rest of the block layer
> has signed int limits on a transaction, nothing could ever reach
> that maximum, and we risk overflowing an int once
On 06/15/2016 09:57 AM, Kevin Wolf wrote:
> Am 14.06.2016 um 18:02 hat Cédric Le Goater geschrieben:
>> On 06/14/2016 10:38 AM, Kevin Wolf wrote:
>>> Am 14.06.2016 um 10:02 hat Cédric Le Goater geschrieben:
>> #4 0x7fa81c6694ac in bdrv_aligned_pwritev (bs=0x7fa81d4dd050,
>> req=<optimized out>,
On 15/06/2016 13:16, Kevin Wolf wrote:
> linux-aio uses a BH in order to make sure that the remaining completions
> are processed even in nested event loops of completion callbacks in
> order to avoid deadlocks.
>
> There is no need, however, to have the BH overhead for the first call
> into
On 06/15/2016 02:46 AM, Denis V. Lunev wrote:
> On 06/15/2016 06:00 AM, Eric Blake wrote:
>> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>>> With a bdrv_co_write_zeroes method on a target BDS zeroes will not be
>>> placed
>>> into the wire. Thus the target could be very efficiently zeroed out.
On 06/15/2016 02:41 AM, Denis V. Lunev wrote:
> On 06/15/2016 05:36 AM, Eric Blake wrote:
>> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>>> There is no need to scan allocation tables if we have mark_all_dirty
>>> flag
>>> set. Just mark it all dirty.
>>>
>>> int ret, n;
>>> end =
On Wed, Jun 15, 2016 at 11:27:21AM +0100, Alex Bligh wrote:
> Perhaps this should read "If an error occurs, the server MUST either initiate
> a hard disconnect before the entire payload has been sent or
> set the appropriate code in the error field and send the response header
> without any
On Wed, Jun 15, 2016 at 11:58:31AM +0200, Kashyap Chamarthy wrote:
> Seems like supplying "qcow2" file BlockdevDriver option to QMP
> `blockdev-add` results in a SIGSEGV:
>
> [...]
> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
> 0x55a0121f in
linux-aio uses a BH in order to make sure that the remaining completions
are processed even in nested event loops of completion callbacks in
order to avoid deadlocks.
There is no need, however, to have the BH overhead for the first call
into qemu_laio_completion_bh() or after all pending
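The deadlock-avoidance being preserved here can be sketched with the BH and io_getevents reduced to counters (a simplified model, not the linux-aio code itself): the first, non-nested call drains completions directly, while a nested call returns immediately and lets the outer loop finish, which is the guarantee the BH indirection used to provide.

```c
#include <stdbool.h>

static int pending;          /* completions not yet processed */
static int processed;
static bool in_progress;     /* are we already inside the loop? */

static void process_completions(void)
{
    if (in_progress) {
        return;              /* nested call: the outer loop drains the rest */
    }
    in_progress = true;
    while (pending > 0) {
        pending--;
        processed++;
        /* a completion callback may re-enter us here; the guard above
         * turns that recursion into a cheap no-op instead of a deadlock */
        process_completions();
    }
    in_progress = false;
}
```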
* Denis V. Lunev (d...@openvz.org) wrote:
> Block commit of the active image to the backing store on a slow disk
> could never end. For example with the guest with the following loop
> inside
> while true; do
> dd bs=1k count=1 if=/dev/zero of=x
> done
> running above slow storage
On 06/15/2016 12:19 PM, Stefan Hajnoczi wrote:
> On Tue, Jun 14, 2016 at 09:20:47PM -0600, Eric Blake wrote:
>> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>>> We should not take into account zero blocks for delay calculations.
>>> They are not read and thus IO throttling is not required. In the
>>> other
On 06/15/2016 01:25 PM, Kevin Wolf wrote:
> Am 15.06.2016 um 11:34 hat Denis V. Lunev geschrieben:
>> On 06/15/2016 12:06 PM, Kevin Wolf wrote:
>>> The second big thing is that I don't want to see new users of the
>>> notifiers in I/O functions. Let's try if we can't add a filter
>>> BlockDriver instead. Then
On 15/06/2016 12:27, Alex Bligh wrote:
>
> On 15 Jun 2016, at 10:18, Paolo Bonzini wrote:
>
>>> So what should those servers do (like 2 of mine) which don't buffer
>>> the entire read, if they get an error having already sent some data?
>>
>> They have sent an error code
Am 15.06.2016 um 11:02 hat Stefan Hajnoczi geschrieben:
> On Tue, Jun 14, 2016 at 03:32:29PM +0200, Kevin Wolf wrote:
> > Previous series have already converted some block drivers to byte-based
> > rather
> > than sector-based interfaces. However, the common I/O path as well as
> > raw-posix
> >
On 15 Jun 2016, at 10:18, Paolo Bonzini wrote:
>> So what should those servers do (like 2 of mine) which don't buffer
>> the entire read, if they get an error having already sent some data?
>
> They have sent an error code of zero, and it turned out to be wrong. So
> the
Am 15.06.2016 um 11:34 hat Denis V. Lunev geschrieben:
> On 06/15/2016 12:06 PM, Kevin Wolf wrote:
> >The second big thing is that I don't want to see new users of the
> >notifiers in I/O functions. Let's try if we can't add a filter
> >BlockDriver instead. Then we'd add an option to set the
Seems like supplying "qcow2" file BlockdevDriver option to QMP
`blockdev-add` results in a SIGSEGV:
[...]
Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
0x55a0121f in visit_type_BlockdevRef ()
[...]
Reproducer
--
Tested with:
On Tue, Jun 14, 2016 at 06:25:07PM +0300, Denis V. Lunev wrote:
> Block commit of the active image to the backing store on a slow disk
> could never end. For example with the guest with the following loop
> inside
> while true; do
> dd bs=1k count=1 if=/dev/zero of=x
> done
>
On Tue, Jun 14, 2016 at 06:25:15PM +0300, Denis V. Lunev wrote:
> Block commit of the active image to the backing store on a slow disk
> could never end. For example with the guest with the following loop
> inside
> while true; do
> dd bs=1k count=1 if=/dev/zero of=x
> done
>
On 06/15/2016 12:06 PM, Kevin Wolf wrote:
> Am 14.06.2016 um 17:25 hat Denis V. Lunev geschrieben:
>> Block commit of the active image to the backing store on a slow disk
>> could never end. For example with the guest with the following loop
>> inside
>> while true; do
>> dd bs=1k count=1
On Tue, Jun 14, 2016 at 09:20:47PM -0600, Eric Blake wrote:
> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
> > We should not take into account zero blocks for delay calculations.
> > They are not read and thus IO throttling is not required. In the
> > other case VM migration with 16 Tb QCOW2 disk
On Tue, Jun 14, 2016 at 06:25:13PM +0300, Denis V. Lunev wrote:
> Signed-off-by: Denis V. Lunev
> Reviewed-by: Vladimir Sementsov-Ogievskiy
> CC: Stefan Hajnoczi
> CC: Fam Zheng
> CC: Kevin Wolf
On Tue, Jun 14, 2016 at 06:25:13PM +0300, Denis V. Lunev wrote:
> Signed-off-by: Denis V. Lunev
> Reviewed-by: Vladimir Sementsov-Ogievskiy
> CC: Stefan Hajnoczi
> CC: Fam Zheng
> CC: Kevin Wolf
- Original Message -
> From: "Alex Bligh"
> To: "Wouter Verhelst"
> Cc: "Alex Bligh" , nbd-gene...@lists.sourceforge.net,
> "Paolo Bonzini" ,
> qemu-de...@nongnu.org, "qemu block"
> Sent:
Am 14.06.2016 um 17:25 hat Denis V. Lunev geschrieben:
> Block commit of the active image to the backing store on a slow disk
> could never end. For example with the guest with the following loop
> inside
> while true; do
> dd bs=1k count=1 if=/dev/zero of=x
> done
> running above
On Tue, Jun 14, 2016 at 03:32:29PM +0200, Kevin Wolf wrote:
> Previous series have already converted some block drivers to byte-based rather
> than sector-based interfaces. However, the common I/O path as well as
> raw-posix
> still enforced a minimum alignment of 512 bytes because some
On 06/15/2016 07:18 AM, Eric Blake wrote:
> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>> Block commit of the active image to the backing store on a slow disk
>> could never end. For example with the guest with the following loop
>> inside
>> while true; do
>> dd bs=1k count=1 if=/dev/zero of=x
> On 15 Jun 2016, at 09:03, Wouter Verhelst wrote:
>
> On Wed, Jun 15, 2016 at 09:05:22AM +0200, Wouter Verhelst wrote:
>> There are more clients than the Linux and qemu ones, but I think it's
>> fair to say that those two are the most important ones. If they agree
>> that a read
On 06/15/2016 06:00 AM, Eric Blake wrote:
> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>> With a bdrv_co_write_zeroes method on a target BDS zeroes will not be placed
>> into the wire. Thus the target could be very efficiently zeroed out. This
>> should be done with the largest chunk possible.
>> This
On 06/15/2016 05:36 AM, Eric Blake wrote:
> On 06/14/2016 09:25 AM, Denis V. Lunev wrote:
>> There is no need to scan allocation tables if we have mark_all_dirty flag
>> set. Just mark it all dirty.
>>
>> Signed-off-by: Denis V. Lunev
>> Reviewed-by: Vladimir
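The shortcut the patch describes can be sketched as follows (helper and flag names invented; the allocation table is modeled as a trivial predicate):

```c
#include <stdbool.h>
#include <string.h>

#define NB_CLUSTERS 8

static bool dirty[NB_CLUSTERS];
static int table_lookups;                            /* counts the work skipped */
static bool allocated(int i) { return i % 2 == 0; }  /* stand-in for the table */

/* When mark_all_dirty is set there is no reason to consult allocation
 * tables at all: mark everything and return early. */
static void init_dirty_bitmap(bool mark_all_dirty)
{
    if (mark_all_dirty) {
        memset(dirty, 1, sizeof(dirty));   /* everything dirty, zero lookups */
        return;
    }
    for (int i = 0; i < NB_CLUSTERS; i++) {
        table_lookups++;
        dirty[i] = allocated(i);
    }
}
```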
On Wed, Jun 15, 2016 at 09:05:22AM +0200, Wouter Verhelst wrote:
> There are more clients than the Linux and qemu ones, but I think it's
> fair to say that those two are the most important ones. If they agree
> that a read reply which errors should come without payload, then I think
> we should
Am 13.06.2016 um 17:36 hat Kevin Wolf geschrieben:
> Am 13.06.2016 um 13:30 hat Daniel P. Berrange geschrieben:
> > So rather than fix the crash, and backport it to stable
> > releases, just go ahead with what we have warned users about
> > and disable any use of qcow2 encryption in the system
> >
Am 14.06.2016 um 18:02 hat Cédric Le Goater geschrieben:
> On 06/14/2016 10:38 AM, Kevin Wolf wrote:
> > Am 14.06.2016 um 10:02 hat Cédric Le Goater geschrieben:
> #4 0x7fa81c6694ac in bdrv_aligned_pwritev (bs=0x7fa81d4dd050,
> req=<optimized out>, offset=30878208,
> bytes=512,
Am 14.06.2016 um 18:13 hat Max Reitz geschrieben:
> On 14.06.2016 17:54, John Snow wrote:
> >
> >
> > On 06/14/2016 09:19 AM, Max Reitz wrote:
> >> On 10.06.2016 23:59, John Snow wrote:
> >>> If a device still has an attached BDS because the medium has not yet
> >>> been removed, we will be
On Tue, Jun 14, 2016 at 04:02:15PM +0100, Alex Bligh wrote:
>
> On 14 Jun 2016, at 14:32, Paolo Bonzini wrote:
>
> >
> > On 13/06/2016 23:41, Alex Bligh wrote:
> >> That's one of the reasons that there is a proposal to add
> >> STRUCTURED_READ to the spec (although I still
On Mon, Jun 13, 2016 at 10:41:05PM +0100, Alex Bligh wrote:
> For amusement value, the non-threaded handler (which is not used
> any more) does not send any payload on an error:
> https://github.com/yoe/nbd/blob/master/nbd-server.c#L1734
nbd-server used to just drop the connection on read error.