block/copy-on-read.c
index b4d6b7efc3..5149fcf63a 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -146,11 +146,11 @@ cor_co_preadv_part(BlockDriverState *bs, int64_t offset,
                                             int64_t bytes,
         local_flags = flags;
         /* In case of failure, try to copy-on-read anyway */
-        ret = bdrv_is_allocated(bs->file->bs, offset, bytes, &n);
+        ret = bdrv_co_is_allocated(bs->file->bs, offset, bytes, &n);
On Wed, Apr 05, 2023 at 12:32:16PM +0200, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
> ---
> block/copy-before-write.c | 2 +-
> block/copy-on-read.c      | 8
> block/io.c                | 6 +++---
> block/mirror.c            | 10 +-
> block/qcow2.c             |
On 13/2/22 15:24, 沈梦姣 wrote:
Hi,
I'm trying to understand this function, but there seems to be no comment
in the header file. Could anyone help explain it? An example would be
great. Thanks in advance!
thanks
Cc'ing qemu-block@ list.
From: Eric Blake
Not all callers care about which BDS owns the mapping for a given
range of the file, or where the zeroes lie within that mapping. In
particular, bdrv_is_allocated() cares more about finding the
largest run of allocated data from the guest perspective, whether
or not that data is consecutive from the host perspective, and
whether or not the
On 09/26/2017 01:31 PM, John Snow wrote:
> On 09/13/2017 12:03 PM, Eric Blake wrote:
>> Not all callers care about which BDS owns the mapping for a given
>> range of the file. In particular, bdrv_is_allocated() cares more
>> about finding the largest run of allocated data from the guest
>> perspective, whether or not that data is
Not all callers care about which BDS owns the mapping for a given
range of the file. In particular, bdrv_is_allocated() cares more
about finding the largest run of allocated data from the guest
perspective, whether or not that data is consecutive from the
host perspective. Therefore, doing
hand, no rounding is needed for callers
that should just continue to work with byte alignment.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_is_allocated(). But
some code, particularly bdrv_commit(), gets a lot simpler because it
no longer has to mess with sectors.
I've therefore started the task of
converting our block status code to report at a byte granularity
rather than sectors.
The overall conversion currently looks like:
part 1: bdrv_is_allocated (this series, v4 was at [1])
part 2: dirty-bitmap (v4 is posted [2]; needs reviews)
part 3: bdrv_get_block_status (v2 is posted [3] and is mostly reviewed)
part 4: upcoming series
>> float local_progress = 0;
>>
>> buf_old = blk_blockalign(blk, IO_BUF_SIZE);
>> @@ -3276,12 +3277,14 @@ static int img_rebase(int argc, char **argv)
>> }
>>
>> /* If the cluster is allocated, we don't need to take action */
>> -ret = bdrv_is_allocated(bs, sector, n, &n);
>> +ret = bdrv_is_allocated(bs, sector << B
On 07/03/2017 05:14 PM, Eric Blake wrote:
> Not all callers care about which BDS owns the mapping for a given
> range of the file. In particular, bdrv_is_allocated() cares more
> about finding the largest run of allocated data from the guest
> perspective, whether or not that data is
On Wed, 07/05 09:01, Eric Blake wrote:
> On 07/05/2017 07:07 AM, Fam Zheng wrote:
> >>> Sorry for bikeshedding.
> >>
> >> Not a problem, I also had some double-takes in writing my own code
> >> trying to remember which way I wanted the 'allocation' boolean to be
> >> set, so coming up with a more intuitive name/default state in order to
> >> help
e" means BDRV_BLOCK_OFFSET_VALID is wanted.
>
> Reasonable idea; other [shorter] names I've been toying with:
> strict
> mapping
> precise
>
> any of which, if true (set true by bdrv_get_block_status), means that I
> care more about BDRV_BLOCK_OFFSET_VALID and validity for learning host
> offsets, if false it means I'm okay getting a larger *pnum even if it
> extends over disjoint host offsets; or:
> fast
> which if true (set true b
On Mon, 07/03 17:14, Eric Blake wrote:
> @@ -1717,6 +1718,10 @@ int64_t coroutine_fn
> bdrv_co_get_block_status_from_backing(BlockDriverState *bs,
> * Drivers not implementing the functionality are assumed to not support
> * backing files, hence all their sectors are reported as allocated.
>
On 06/27/2017 02:24 PM, Eric Blake wrote:
> We are gradually moving away from sector-based interfaces, towards
> byte-based. In the common case, allocation is unlikely to ever use
> values that are not naturally sector-aligned, but it is possible
> that byte-based values will let us be more precis
at a byte granularity
rather than sectors.
This is part one of that conversion: bdrv_is_allocated().
Other parts still need a v3, but here's the link to their most
recent posting:
tracking dirty bitmaps by bytes:
https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg03859.html
>> naturally aligned to sectors or even
>> much higher levels). I've therefore started the task of
>> converting our block status code to report at a byte granularity
>> rather than sectors.
>>
>> This is part one of that conversion: bdrv_is_allocated().
>> Other parts still need a v2, but here's the link to their v1:
>> tracking dirty bitmaps by bytes:
at a byte granularity
rather than sectors.
This is part one of that conversion: bdrv_is_allocated().
Other parts still need a v2, but here's the link to their v1:
tracking dirty bitmaps by bytes:
https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg02163.html
replacing bdrv_get_block_statu
>>>> /* Skip unallocated sectors; intentionally treats failure as
>>>>  * an allocated sector */
>>>> while (cur_sector < total_sectors &&
>>>> -       !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>>> -
From: Lidong Chen
when block migration with high-speed, mig_save_device_bulk hold the
BQL and invoke bdrv_is_allocated frequently. This patch moves
bdrv_is_allocated() into bb's AioContext. It will execute without
blocking other I/O activity.
Signed-off-by: Lidong Chen
---
v4 chan
>>> * an allocated sector */
>>> while (cur_sector < total_sectors &&
>>> -       !bdrv_is_allocated(blk_bs(bb), cur_sector,
>>> -                          MAX_IS_ALLOCATED_SEARCH, &nr_sectors)) {
>>> -cur_sector += nr_sectors;
>>> +
callers are sector-aligned,
> but that can be relaxed when a later patch implements byte-based
> block status. Therefore, for the most part this patch is just the
> addition of scaling at the callers followed by inverse scaling at
> bdrv_is_allocated(). But some code, particularly bdrv
>> to sectors or even
>> much higher levels). I've therefore started the task of
>> converting our block status code to report at a byte granularity
>> rather than sectors.
>>
>> This is part one of that conversion: bdrv_is_allocated().
>> Other parts (still to be written) include tracking dirty bitmaps
>> by bytes (it's still one bit per g
-based
block status. Therefore, for the most part this patch is just the
addition of scaling at the callers followed by inverse scaling at
bdrv_is_allocated(). But some code, particularly bdrv_commit(),
gets a lot simpler because it no longer has to mess with sectors;
also, it is now possible to pass
at a byte granularity
rather than sectors.
This is part one of that conversion: bdrv_is_allocated().
Other parts (still to be written) include tracking dirty bitmaps
by bytes (it's still one bit per granularity, but now we won't
be double-scaling from bytes to sectors to granularity),
From: Eric Blake
Migration is the only code left in the tree that does not react
to bdrv_is_allocated() failures. But as there is no useful way
to react to the failure, and we are merely skipping unallocated
sectors on success, just document that our choice of handling
is intended.
Signed-off
From: Eric Blake
If bdrv_is_allocated() fails, we should react to that failure.
For 2 of the 3 callers, reporting the error was easy. But in
cluster_was_modified() and its lone caller
get_cluster_count_for_direntry(), it's rather invasive to update
the logic to pass the error back; so the
From: Eric Blake
If bdrv_is_allocated() fails, we should immediately do the backup
error action, rather than attempting backup_do_cow() (although
that will likely fail too).
Signed-off-by: Eric Blake
Signed-off-by: Kevin Wolf
---
block/backup.c | 14 ++
1 file changed, 10
Am 08.03.2017 um 22:34 hat Eric Blake geschrieben:
> bdrv_is_allocated() returns tri-state, not just bool, although
> there were several callers using it as a bool. Fix them to
> either propagate the error or to document why treatment of
> failure like allocation is okay.
>
&g
bdrv_is_allocated() returns tri-state, not just bool, although
there were several callers using it as a bool. Fix them to
either propagate the error or to document why treatment of
failure like allocation is okay.
[Found during a larger effort to convert bdrv_get_block_status
to be byte-based
bdrv_is_allocated() should return either 0 or 1 in successful cases.
We're lucky that currently, the callers that rely on this (e.g. because
they check for ret == 1) don't seem to break badly. They just might skip
some optimisation or in the case of qemu-io 'map' print se
bdrv_is_allocated() shouldn't return true for sectors that are
unallocated, but after the end of a short backing file, even though
such sectors are (correctly) marked as containing zeros.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
---
block.c | 10 ++
include/
Il 06/05/2014 15:30, Kevin Wolf ha scritto:
bdrv_is_allocated() shouldn't return true for sectors that are
unallocated, but after the end of a short backing file, even though
such sectors are (correctly) marked as containing zeros.
Signed-off-by: Kevin Wolf
Nice. :)
Paolo
From: Stefan Hajnoczi
There is no need for bdrv_commit() to use the BlockDriver
.bdrv_is_allocated() interface directly. Converting to the public
interface gives us the freedom to drop .bdrv_is_allocated() entirely in
favor of a new .bdrv_co_is_allocated() in the future.
Signed-off-by: Stefan
From: Stefan Hajnoczi
Now that all block drivers have been converted to
.bdrv_co_is_allocated() we can drop .bdrv_is_allocated().
Note that the public bdrv_is_allocated() interface is still available
but is in fact a synchronous wrapper around .bdrv_co_is_allocated().
Signed-off-by: Stefan
Am 14.11.2011 13:44, schrieb Stefan Hajnoczi:
> The bdrv_is_allocated() interface is not suitable for use while the VM is
> running. It is a synchronous interface so it may block the running VM for
> arbitrary amounts of time. It also assumes it is the only block driver
> operation
The bdrv_is_allocated() interface is not suitable for use while the VM is
running. It is a synchronous interface so it may block the running VM for
arbitrary amounts of time. It also assumes it is the only block driver
operation and there is a risk that internal state could be corrupted if
On Wed, Jun 15, 2011 at 5:57 PM, Stefan Hajnoczi wrote:
> Anyway, bdrv_getlength() will return the total_sectors value instead
> of calling into raw-posix.c .bdrv_getlength(). That's why it should
> be cheap.
Yeah, I see it now after a closer look in the drivers code. It looks
like I get this 9%
2011/6/15 Dmitry Konishchev :
> On Wed, Jun 15, 2011 at 5:33 PM, Stefan Hajnoczi wrote:
>> "disable caching"?
>
> Image geometry caching. I meant If I call bdrv_get_geometry() every
> time I need image geometry instead of obtaining it from bs_geometry
> variable.
Haha, sorry. Too much caching: -
On Wed, Jun 15, 2011 at 4:02 PM, Stefan Hajnoczi wrote:
> We need to fully understand performance before applying optimizations
> on top. Otherwise it is possible to paper over a problem while
> leaving the root cause unsolved. Avoiding lseek(2) is very important,
> not just for qemu-img but als
On Wed, Jun 15, 2011 at 10:50 AM, Dmitry Konishchev
wrote:
> On Wed, Jun 15, 2011 at 12:39 PM, Stefan Hajnoczi wrote:
>> Why is bdrv_get_geometry() slow?
>
> Mmm.. Frankly, I haven't looked so deep, but it is going to be slow at
> least for raw images due to using lseek().
We need to fully under
             continue;
+        }
+
+        if (bs_sector + n <= cur_sectors) {
+            cur_n = n;
+        } else {
+            cur_n = cur_sectors - bs_sector;
+        }
+
+        if (bdrv_is_allocated(cur_bs, bs_sector, cur_n, &allocated_num)) {
+
On Tue, Jun 14, 2011 at 7:58 PM, Stefan Hajnoczi wrote:
> Yes, please.
OK, I'll do it as soon I'll find time for it.
On Tue, Jun 14, 2011 at 7:58 PM, Stefan Hajnoczi wrote:
> For image files the block layer should be caching the device capacity (size)
> anyway, so you probably don't need to al
On Mon, Jun 13, 2011 at 1:13 PM, Dmitry Konishchev wrote:
> I haven't done this because in this case I have to pass too lot of
> local variables to this function. Just not sure that it'll look
> better. But if you mind I surely can do this.
Should I?
On Mon, Jun 13, 2011 at 12:26 PM, Stefan Hajnoczi wrote:
> The optimization is to check allocation metadata instead of
> unconditionally reading and then checking for all zeroes?
Yeah, exactly.
On Mon, Jun 13, 2011 at 12:26 PM, Stefan Hajnoczi wrote:
> Why introduce a new constant instead of usi