On Tue, Nov 05, 2019 at 11:02:44AM +0100, Dietmar Maurer wrote:
> Example: Backup from ceph disk (rbd_cache=false) to local disk:
>
> backup_calculate_cluster_size returns 64K (correct for my local .raw image)
>
> Then the backup job starts to read 64K blocks from ceph.
>
> But ceph always [...]

[Quoted in replies by Stefan Hajnoczi (06.11.19 09:32), Max Reitz
(06.11.19 10:37) and Wolfgang Bumiller (06.11.19 11:34).]
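To put illustrative numbers on the mismatch (not figures from the thread,
and assuming rbd stores data in 4 MiB objects, as the “4 MB chunk”
mentioned later in the discussion suggests): reading a 32 GiB image in
full issues

    32 GiB / 64 KiB = 524,288 requests at the backup cluster size, but only
    32 GiB /  4 MiB =   8,192 requests at the object size.

And if each 64 KiB request makes the storage touch the whole 4 MiB object
around it, up to 64 requests end up hitting the same object.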
On 06.11.19 11:18, Dietmar Maurer wrote:
>> The thing is, it just seems unnecessary to me to take the source cluster
>> size into account in general. It seems weird that a medium only allows
>> 4 MB reads, because, well, guests aren’t going to take that into account.
>
> Maybe it is strange, but it is quite obvious that there is an optimal [...]
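As a sketch of what “taking the source cluster size into account” could
look like (a hypothetical helper, not the code in block/backup.c;
bdrv_get_info() and BlockDriverInfo are existing QEMU block-layer
interfaces, but whether a driver such as rbd reports a useful
cluster_size through them is exactly the open question here):

/* Hypothetical sketch; assumes the QEMU block-layer headers. */
static int64_t cluster_size_considering_source(BlockDriverState *source,
                                               BlockDriverState *target)
{
    BlockDriverInfo sdi, tdi;
    int64_t cluster_size = 64 * 1024;   /* current default */

    if (bdrv_get_info(target, &tdi) >= 0 && tdi.cluster_size > 0) {
        cluster_size = MAX(cluster_size, tdi.cluster_size);
    }
    if (bdrv_get_info(source, &sdi) >= 0 && sdi.cluster_size > 0) {
        /* e.g. 4 MiB, if rbd reported its object size this way */
        cluster_size = MAX(cluster_size, sdi.cluster_size);
    }
    return cluster_size;
}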
On 06.11.19 12:18, Dietmar Maurer wrote:
>> And if it issues a smaller request, there is no way for a guest device
>> to tell it “OK, here’s your data, but note we have a whole 4 MB chunk
>> around it, maybe you’d like to take that as well...?”
>>
>> I understand wanting to increase the backup buffer size, but I don’t
>> quite understand why [...]
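The “increase the backup buffer size” option can be illustrated with a
small sketch (hypothetical names, not QEMU code): the request size used
towards the source can be made large without changing the 64K granularity
used for dirty tracking and progress accounting.

/* Hypothetical sketch; read_chunk()/write_chunk()/mark_copied() stand in
 * for the real block-layer and bitmap calls.  buf must hold COPY_BUFFER
 * bytes. */
#include <stdint.h>

#define TRACK_CLUSTER  (64 * 1024)        /* tracking granularity */
#define COPY_BUFFER    (4 * 1024 * 1024)  /* request size towards the source */

extern int read_chunk(int64_t offset, int64_t bytes, void *buf);
extern int write_chunk(int64_t offset, int64_t bytes, const void *buf);
extern void mark_copied(int64_t offset, int64_t bytes, int64_t granularity);

static int copy_region(int64_t start, int64_t len, void *buf)
{
    for (int64_t off = start; off < start + len; off += COPY_BUFFER) {
        int64_t bytes = start + len - off;
        if (bytes > COPY_BUFFER) {
            bytes = COPY_BUFFER;
        }
        if (read_chunk(off, bytes, buf) < 0 ||
            write_chunk(off, bytes, buf) < 0) {
            return -1;
        }
        /* dirty/progress tracking stays at the finer 64K granularity */
        mark_copied(off, bytes, TRACK_CLUSTER);
    }
    return 0;
}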
On 06.11.2019 16:52, Max Reitz wrote:
> Let me elaborate: Yes, a cluster size generally means that it is most
> “efficient” to access the storage at that size. But there’s a tradeoff.
> At some point, reading the data takes sufficiently long that reading a
> bit of metadata doesn’t matter anymore (usually, that is).

Any network [...]
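Max’s tradeoff, with purely illustrative figures: assume roughly 0.5 ms of
fixed per-request cost (round trip, metadata lookup) and about 100 MB/s of
sequential throughput. A 4 MiB read then spends about 40 ms moving data,
so the fixed cost adds little more than 1%; a 64 KiB read moves data for
only about 0.6 ms, so the same fixed cost adds almost as much time again.
Past some request size, the per-request overhead simply stops mattering.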