Re: [pve-devel] backup ceph high iops and slow

2014-12-09 Thread VELARTIS Philipp Dürhammer
:-( -Original Message- From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On behalf of Dmitry Petuhov Sent: Friday, 17 October 2014 08:04 To: pve-devel@pve.proxmox.com Subject: Re: [pve-devel] backup ceph high iops and slow 16.10.2014 22:33, VELARTIS Philipp

Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Alexandre DERUMIER
18 October 2014 09:47:08 Subject: RE: [pve-devel] backup ceph high iops and slow We read 64K blocks, so Don't know for backup, but drive-mirror has a granularity option to change the block size: # @granularity: #optional granularity of the dirty bitmap, default is 64K. # Must be a power

Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Dietmar Maurer
+RBD supports read-ahead/prefetching to optimize small, sequential reads. +This should normally be handled by the guest OS in the case of a VM, How should we do read-ahead inside qemu? Manually?
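For context, the read-ahead feature being quoted is configured on the librbd client side in ceph.conf. The option names below are the ones documented for the Ceph release that introduced the feature; the values are a sketch, not tuned recommendations:

```ini
[client]
# start prefetching after this many sequential read requests
rbd readahead trigger requests = 10
# maximum number of bytes to read ahead at a time
rbd readahead max bytes = 524288
# turn read-ahead off after the guest has read this much
# (assumes the guest OS cache has warmed up by then)
rbd readahead disable after bytes = 52428800
```

Since this runs in librbd, it applies to qemu without guest cooperation, which is what the question above is getting at.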

Re: [pve-devel] backup ceph high iops and slow

2014-10-19 Thread Alexandre DERUMIER
, VELARTIS Philipp Dürhammer p.duerham...@velartis.at, Dmitry Petuhov mityapetu...@gmail.com Sent: Sunday 19 October 2014 18:07:30 Subject: RE: [pve-devel] backup ceph high iops and slow +RBD supports read-ahead/prefetching to optimize small, sequential reads. +This should normally

Re: [pve-devel] backup ceph high iops and slow

2014-10-18 Thread Dietmar Maurer
We read 64K blocks, so Don't know for backup, but drive-mirror has a granularity option to change the block size: # @granularity: #optional granularity of the dirty bitmap, default is 64K. # Must be a power of 2 between 512 and 64M. Although it would be much easier it
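For reference, the granularity argument quoted above is passed to the drive-mirror QMP command in bytes (a power of 2 between 512 and 64M). A hypothetical invocation, assuming a drive named drive-virtio0 and a pre-created raw RBD target, might look like:

```json
{ "execute": "drive-mirror",
  "arguments": {
    "device": "drive-virtio0",
    "target": "rbd:pool/vm-100-mirror",
    "format": "raw",
    "mode": "existing",
    "sync": "full",
    "granularity": 1048576
  } }
```

With a 1 MiB granularity the mirror would issue far fewer, larger requests than the 64K backup reads, which is the behavior the thread is contrasting.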

Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Dmitry Petuhov
16.10.2014 22:33, VELARTIS Philipp Dürhammer writes: Why do backups with ceph cause such high IOPS? I get around 600 IOPS at 40 MB/s, which is by the way very slow for a backup. When I do a disk clone from local to ceph I get 120 MB/s (which is the network limit of the old proxmox nodes) and

Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Alexandre DERUMIER
. - Original Message - From: Dmitry Petuhov mityapetu...@gmail.com To: pve-devel@pve.proxmox.com Sent: Friday 17 October 2014 08:04:04 Subject: Re: [pve-devel] backup ceph high iops and slow 16.10.2014 22:33, VELARTIS Philipp Dürhammer writes: Why do backups with ceph cause such high IOPS? I get

Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Dmitry Petuhov
] backup ceph high iops and slow 16.10.2014 22:33, VELARTIS Philipp Dürhammer writes: Why do backups with ceph cause such high IOPS? I get around 600 IOPS at 40 MB/s, which is by the way very slow for a backup. When I do a disk clone from local to ceph I get 120 MB/s (which is the network limit from

Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread VELARTIS Philipp Dürhammer
On behalf of Dmitry Petuhov Sent: Friday, 17 October 2014 09:25 To: Alexandre DERUMIER Cc: pve-devel@pve.proxmox.com Subject: Re: [pve-devel] backup ceph high iops and slow I think that skipping free space isn't the main issue: backup of used space is even slower. 17.10.2014 10:31, Alexandre

Re: [pve-devel] backup ceph high iops and slow

2014-10-17 Thread Alexandre DERUMIER
diet...@proxmox.com To: VELARTIS Philipp Dürhammer p.duerham...@velartis.at, Dmitry Petuhov mityapetu...@gmail.com, Alexandre DERUMIER aderum...@odiso.com Cc: pve-devel@pve.proxmox.com Sent: Friday 17 October 2014 13:02:29 Subject: RE: [pve-devel] backup ceph high iops and slow I agree. The main

[pve-devel] backup ceph high iops and slow

2014-10-16 Thread VELARTIS Philipp Dürhammer
Why do backups with ceph cause such high IOPS? I get around 600 IOPS at 40 MB/s, which is by the way very slow for a backup. When I do a disk clone from local to ceph I get 120 MB/s (which is the network limit of the old proxmox nodes) and only around 100-120 IOPS, which is normal for a
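The numbers in this question can be sanity-checked with a little arithmetic: average request size is throughput divided by IOPS. A minimal sketch, using the figures from the message above:

```python
def avg_request_kib(throughput_mib_s: float, iops: float) -> float:
    """Average request size in KiB implied by a throughput/IOPS pair."""
    return throughput_mib_s * 1024 / iops

# Backup: ~40 MB/s at ~600 IOPS -> roughly 68 KiB per request,
# consistent with the 64K blocks the backup code reads.
backup = avg_request_kib(40, 600)

# Clone: ~120 MB/s at ~120 IOPS -> roughly 1 MiB per request.
clone = avg_request_kib(120, 120)

print(round(backup, 1), round(clone, 1))  # → 68.3 1024.0
```

So the backup is not doing more work than the clone; it is simply splitting the same bytes into roughly 16x more, smaller requests, which is exactly what the granularity and read-ahead suggestions in the replies try to address.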