Have a look at the iostat -x 1 1000 output to see what the drives are doing
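(For anyone following along: a minimal sketch of pulling the two most telling columns, await and %util, out of an extended iostat line. The sample line and its values are illustrative only, not measured on this cluster; real column layout can vary slightly between sysstat versions, so check the header row of your own output.)

```shell
# One sample data line in the classic `iostat -x` layout:
# Device rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
# (illustrative values for a busy OSD disk, not from this cluster)
sample='sdb 0.00 0.00 12.00 340.00 48.00 1360.00 8.00 4.50 12.80 2.70 95.00'

# await = average I/O wait in ms (10th field), %util = device saturation (last field)
await=$(echo "$sample" | awk '{print $10}')
util=$(echo "$sample" | awk '{print $NF}')
echo "await=${await}ms util=${util}%"
```

High await together with %util pinned near 100 on the journal or data disks would point at the drives rather than the network.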

On Wed, Dec 23, 2015 at 4:35 PM Florian Rommel <
[email protected]> wrote:

> Ah, totally forgot the additional details :)
>
> OS is SUSE Enterprise Linux 12.0 with all patches,
> Ceph version 0.94.3
> 4-node cluster with 2x 10GbE networking, one for the cluster and one for
> the public network, plus 1 additional server purely as an admin server.
> The test machine is also 10GbE-connected.
>
> ceph.conf is included:
> [global]
> fsid = 312e0996-a13c-46d3-abe3-903e0b4a589a
> mon_initial_members = ceph-admin, ceph-01, ceph-02, ceph-03, ceph-04
> mon_host =
> 192.168.0.190,192.168.0.191,192.168.0.192,192.168.0.193,192.168.0.194
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> filestore_xattr_use_omap = true
> public network = 192.168.0.0/24
> cluster network = 192.168.10.0/24
>
> osd pool default size = 2
> [osd]
> osd journal size = 2048
>
> Thanks again for any help, and merry Xmas already.
> //F
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
