Add this under the [osd] section of ceph.conf:

osd op threads = 8

Restart the OSD services and try that.
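For example, a minimal sketch (the exact restart command depends on your init system and packaging; systemd units are shown here, and <id> is just a placeholder for each OSD number):

[osd]
osd op threads = 8

# then, on each OSD node, restart the OSD daemons, e.g.
systemctl restart ceph-osd@<id>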






From: "Florian Rommel" <[email protected]> 
To: "Wade Holler" <[email protected]> 
Cc: [email protected] 
Sent: Saturday, December 26, 2015 4:55:06 AM 
Subject: Re: [ceph-users] more performance issues :( 

Hi, iostat shows all OSDs working when data is benched, so the culprit is nowhere to be found. If I add SSD journals on the SSDs that we have, even though they give a much higher fio result than the SATA drives, the speed of the cluster is exactly the same: 150-180 MB/s for writes, while reads max out the 10GbE network with no problem.
rbd bench-write, however, gives me nice throughput: about 500 MB/s to start with, then dropping and flattening out at 320 MB/s and around 90000 IOPS. So what is going on here?
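One comparison I still want to run (the pool and image names below are just placeholders) is a raw RADOS write bench against the RBD bench on the same pool, to see which layer the 150-180 MB/s limit lives in:

# raw RADOS write throughput for 60 seconds, default 4 MB objects
rados bench -p rbd 60 write --no-cleanup

# RBD-level write bench against a test image in the same pool
rbd bench-write test-image --pool rbd

If the raw RADOS numbers are also stuck at 150-180 MB/s, the limit is below RBD (OSDs, journals or network); if only one path is slow, that narrows it down to that layer.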


If I take the journals off the SSDs and move them back onto the disks themselves, I get the same results.
Something is really off with my config, I guess, and I need to do some serious troubleshooting to figure this out.

Thanks for the help so far.
//Florian 






On 24 Dec 2015, at 13:54, Wade Holler <[email protected]> wrote:

Have a look at the iostat -x 1 1000 output to see what the drives are doing.
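For example (which devices to watch depends on where your OSD data and journals live):

iostat -x 1 1000
# watch w/s, wkB/s, await and %util for each OSD data disk and journal device:
# a journal device sitting near 100% util while the data disks idle points at the journal,
# while all data disks pegged at similar throughput points at the spinners themselves.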

On Wed, Dec 23, 2015 at 4:35 PM Florian Rommel <[email protected]> wrote:

Ah, totally forgot the additional details :) 

OS is SUSE Linux Enterprise Server 12 with all patches,
Ceph version 0.94.3.
4-node cluster with 2x 10GbE networking, one for the cluster network and one for the public
network, plus 1 additional server purely as an admin server.
The test machine is also connected at 10GbE.

ceph.conf is included: 
[global] 
fsid = 312e0996-a13c-46d3-abe3-903e0b4a589a 
mon_initial_members = ceph-admin, ceph-01, ceph-02, ceph-03, ceph-04 
mon_host = 192.168.0.190,192.168.0.191,192.168.0.192,192.168.0.193,192.168.0.194
auth_cluster_required = cephx 
auth_service_required = cephx 
auth_client_required = cephx 
filestore_xattr_use_omap = true 
public network = 192.168.0.0/24 
cluster network = 192.168.10.0/24 

osd pool default size = 2 
[osd] 
osd journal size = 2048 
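(For what it's worth, 2048 MB is about what the docs' rule of thumb gives: osd journal size = 2 * expected throughput * filestore max sync interval, e.g. 2 * 150 MB/s * 5 s ≈ 1500 MB for a SATA drive at the default sync interval.)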

Thanks again for any help, and merry Xmas already.
//F 







_______________________________________________ 
ceph-users mailing list 
[email protected] 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 