Not exactly, but we are seeing some drop at 256K compared to 64K. That was with 
random reads on Ubuntu, though. We had to bump read_ahead_kb up from the default 
of 128KB to 512KB to work around it.
On RHEL, however, we saw all sorts of issues with read_ahead_kb for small-block 
random reads, and I think it already defaults to 4MB or so there. If so, try 
reducing it to 512KB and see.
Generally, for sequential reads you need to tune read_ahead_kb to get better 
performance. Without readahead, Ceph performance will generally be lower, 
especially on all-flash, because requests are serialized within a PG.
Our testing is all-flash, though, so take my comments with a grain of salt for 
Ceph + HDD.
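
For reference, checking and bumping readahead looks something like this (rbd0 is 
just a placeholder device name, and whether 512KB is the right value for your 
cluster is something to verify with your own fio runs):

  # show current readahead for the device; the value is in KB
  cat /sys/block/rbd0/queue/read_ahead_kb

  # raise it to 512KB (not persistent across reboot/remap)
  echo 512 | sudo tee /sys/block/rbd0/queue/read_ahead_kb

  # same thing via blockdev, which takes 512-byte sectors (1024 sectors = 512KB)
  sudo blockdev --setra 1024 /dev/rbd0

A udev rule is one common way to make the setting persistent across reboots.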

Thanks & Regards
Somnath


From: EP Komarla [mailto:ep.koma...@flextronics.com]
Sent: Tuesday, July 26, 2016 4:50 PM
To: Somnath Roy; ceph-users@lists.ceph.com
Subject: RE: Ceph performance pattern

Thanks Somnath.

I am running CentOS 7.2.  Have you seen this pattern before?

- epk

From: Somnath Roy [mailto:somnath....@sandisk.com]
Sent: Tuesday, July 26, 2016 4:44 PM
To: EP Komarla <ep.koma...@flextronics.com>; ceph-users@lists.ceph.com
Subject: RE: Ceph performance pattern

Which OS/kernel are you running?
Try setting a bigger read_ahead_kb for sequential runs.

Thanks & Regards
Somnath

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP 
Komarla
Sent: Tuesday, July 26, 2016 4:38 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph performance pattern

Hi,

Below are fio results for sequential reads on my Ceph cluster.  I am trying to 
understand this pattern:

- Why is there a dip in performance for block sizes 32k-256k?
- Is this an expected performance graph?
- Have you seen this kind of pattern before?

[inline image: fio sequential read performance vs. block size]

My cluster details:
Ceph: Hammer release
Cluster: 6 nodes (dual Intel sockets) each with 20 OSDs and 4 SSDs (5 OSD 
journals on one SSD)
Client network: 10Gbps
Cluster network: 10Gbps
FIO test:
- 2 Client servers
- Sequential Read
- Run time of 600 seconds
- Filesize = 1TB
- 10 rbd images per client
- Queue depth=16

Any ideas on tuning this cluster?  Where should I look first?
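
For illustration, one fio invocation per image along the lines below would match 
the parameters above (this assumes the librbd ioengine; the pool, client, and 
image names are placeholders, and bs is swept across runs):

  fio --name=seqread --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=test_img_01 --rw=read --bs=64k --iodepth=16 \
      --runtime=600 --time_based

Each of the 2 client servers runs 10 such jobs, one per rbd image.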

Thanks,

- epk



