Hi Maged,

I also noticed that the write bandwidth is about 114 MB/s, which could be 
limited by the 1G network. But why did the same hardware get a better result 
when running Luminous or even Jewel? I ran the test on one server in this 
cluster, so I assume that about 30% of the write requests (I have 3 nodes) will 
be handled by the node itself, which has much higher bandwidth over its 
internal loopback. So the write performance could be higher than 114 MB/s.
Maybe it's related to the implementation of rados bench itself?
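
Rough numbers behind my reasoning (the ~117 MB/s practical 1GbE figure and the 
"one third of primaries are local" fraction are my own back-of-envelope 
assumptions, not measurements):

# 28 ops/s x 4 MB objects is right at the line rate of a single 1G link:
echo $((28 * 4))         # ~112 MB/s, which matches the ~114 MB/s I observed

# With the client on one of the 3 nodes, roughly 1/3 of primary OSDs are
# local, so only ~2/3 of the client traffic should cross the public network.
# That would put the public-link ceiling closer to 117 * 3 / 2 MB/s,
# ignoring the replication traffic carried by the separate cluster network:
echo $((117 * 3 / 2))    # ~175 MB/s rough ceiling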

Br,
Xu Yun

> On Sep 23, 2019, at 6:30 PM, Maged Mokhtar <mmokh...@petasan.org> wrote:
> 
> 
> On 23/09/2019 08:27, 徐蕴 wrote:
>> Hi ceph experts,
>> 
>> I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same hardware 
>> and made a rough performance comparison. The result suggests that Luminous is 
>> much better, which is unexpected.
>> 
>> 
>> My setup:
>> 3 servers, each with 3 HDD OSDs and 1 SSD as the DB device, and two separate 
>> 1G networks for cluster and public traffic.
>> The pool "test" has pg_num and pgp_num of 32, and a replicated size of 3.
>> Using "rados -p test bench 80 write" to measure write performance.
>> The result:
>> Luminous: Average IOPS 36
>> Nautilus:   Average IOPS 28
>> 
>> Is this difference expected with Nautilus?
>> 
>> Br,
>> Xu Yun
> 
> If you ran "rados -p test bench 80 write" without specifying the block size 
> (-b) option, then you will be using the default 4MB block size. At such sizes 
> you should be looking at Throughput (MB/s) rather than IOPS; 28 IOPS x 4MB 
> will already saturate your 1G network.
> 
> /Maged
> 
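
Following up on Maged's point about the -b option, I will rerun the comparison 
on both releases with explicit block sizes, so that the small-block run 
measures IOPS and the large-block run measures throughput. Something along 
these lines (the pool name "test", the 4K block size, and the thread count are 
just example values; 16 is the default concurrency anyway):

# Small-block writes: compare the IOPS numbers between releases.
rados bench -p test 80 write -b 4096 -t 16 --no-cleanup

# 4MB writes (the default size): compare Throughput (MB/sec), not IOPS.
rados bench -p test 80 write -b 4194304 -t 16 --no-cleanup

# Remove the benchmark objects afterwards.
rados -p test cleanup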