Just an update for anyone who sees this: it looks like Veeam doesn't index its 
content very well, so when it offloads to the capacity tier the IO against the 
local repository is mostly random. That means the IOPS and throughput are not 
great, and you really need to overbuild your volumes (RAID) on your Veeam 
server to get any kind of performance out of it. On a 4-disk RAID10 you get 
about 30 MB/s when offloading.
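If you want to double-check that the local volume (and not RGW) is the limit, a
quick sync random-write test against the repository path will show it. This is
only a rough sketch, assuming Python 3 on the repository server; the
/mnt/veeam-repo path and the sizes are placeholders, and sync random writes are
just a proxy for the random access pattern seen during offload:

# sync_randwrite_bench.py - rough sync random-IO benchmark for the repo volume.
# The file path and sizes below are placeholders; adjust to your layout.
import os
import random
import time

FILE_PATH = "/mnt/veeam-repo/randwrite.test"   # assumed repository mount point
FILE_SIZE = 2 * 1024**3                        # 2 GiB test file
BLOCK_SIZE = 512 * 1024                        # 512 KiB blocks
NUM_WRITES = 2000                              # ~1 GiB written in total

block = os.urandom(BLOCK_SIZE)
fd = os.open(FILE_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, FILE_SIZE)                    # preallocate the test file

start = time.monotonic()
for _ in range(NUM_WRITES):
    # pick a random block-aligned offset, write, then force it to disk
    offset = random.randrange(FILE_SIZE // BLOCK_SIZE) * BLOCK_SIZE
    os.pwrite(fd, block, offset)
    os.fsync(fd)
elapsed = time.monotonic() - start

os.close(fd)
os.unlink(FILE_PATH)

mb_written = NUM_WRITES * BLOCK_SIZE / 1024**2
print(f"{mb_written / elapsed:.1f} MB/s sync random writes "
      f"({NUM_WRITES / elapsed:.0f} IOPS at {BLOCK_SIZE // 1024} KiB)")

If that lands in the same ballpark as the ~30 MB/s seen during offload, the
spinning-disk RAID10 is the limit; if it comes out much higher, the slowdown is
somewhere between Veeam and RGW.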



-----Original Message-----
From: Konstantin Shalygin <k0...@k0ste.ru> 
Sent: Saturday, July 10, 2021 10:28 AM
To: Nathan Fish <lordci...@gmail.com>
Cc: Drew Weaver <drew.wea...@thenap.com>; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: RGW performance as a Veeam capacity tier

Veeam normally produced 2-4 Gbit/s to S3 in our case


k

Sent from my iPhone

> On 10 Jul 2021, at 08:36, Nathan Fish <lordci...@gmail.com> wrote:
> 
> No, that's pretty slow; you should get at least 10x that for 
> sequential writes. Sounds like Veeam is doing a lot of sync random 
> writes. If you are able to add a bit of SSD (preferably NVMe) for 
> journaling, that can help random IO a lot. Alternatively, look into the 
> IO settings for Veeam. (A raw S3 PUT sketch to check the RGW path 
> independently of Veeam follows the quoted thread below.)
> 
> For reference, we have ~100 drives with size=3, and get ~3 GiB/s 
> sequential with the right benchmark tuning.
> 
>> On Fri, Jul 9, 2021 at 1:59 PM Drew Weaver <drew.wea...@thenap.com> wrote:
>> 
>> Greetings.
>> 
>> I've begun testing Ceph 14.2.9 as the capacity tier for a scale-out 
>> backup repository in Veeam 11.
>> 
>> The backup host and the RGW server are connected directly at 10Gbps.
>> 
>> It would appear that the maximum throughput that Veeam is able to achieve 
>> while archiving data to this cluster is about 24 MB/s.
>> 
>> client:   156 KiB/s rd, 24 MiB/s wr, 156 op/s rd, 385 op/s wr
>> 
>> The cluster has 6 OSD hosts with a total of 48 4TB SATA drives.
>> 
>> Does that performance sound about right for 48 4TB SATA drives with 10G 
>> networking?
>> 
>> Thanks,
>> -Drew
>> 
>> 
>> 
>> 
>> 
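To separate what the RGW path can deliver from what Veeam's IO pattern allows,
a raw sequential PUT test against the same cluster is useful (this is the
sketch referenced above). It assumes Python 3 with boto3 installed; the
endpoint, bucket name, and credentials are placeholders:

# s3_put_bench.py - rough sequential PUT throughput test against RGW.
# Endpoint, bucket, and credentials are placeholders; point them at a
# scratch bucket on the same cluster Veeam uses.
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # assumed RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

BUCKET = "veeam-bench"        # scratch bucket, must already exist
OBJ_SIZE = 64 * 1024**2       # 64 MiB test objects
NUM_OBJS = 16                 # ~1 GiB uploaded in total
payload = b"\0" * OBJ_SIZE

start = time.monotonic()
for i in range(NUM_OBJS):
    s3.put_object(Bucket=BUCKET, Key=f"bench/obj-{i}", Body=payload)
elapsed = time.monotonic() - start

total_mb = NUM_OBJS * OBJ_SIZE / 1024**2
print(f"{total_mb / elapsed:.1f} MB/s sequential PUT over {NUM_OBJS} objects")

# clean up the scratch objects afterwards
for i in range(NUM_OBJS):
    s3.delete_object(Bucket=BUCKET, Key=f"bench/obj-{i}")

If single-stream PUTs come in well above the 24 MB/s seen from Veeam, the
bottleneck is on the Veeam/repository side (the random IO described at the top
of the thread); if they don't, it points at RGW or the OSDs.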

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
