Ok, it seems like my problem could be CephFS-related. I have 16 CephFS
clients that do heavy, sub-optimal writes simultaneously. The cluster
has no problem handling the load up until circa 20000 kobjects.
Above this threshold the OSDs start to go down randomly and eventually
get killed by Ceph's watchdog mechanism. The funny thing is that the
CPUs and HDDs are not really overloaded during these events, so I am
really puzzled at the moment.
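
In case anyone wants to compare numbers: below is a minimal sketch
(assuming the python-rados bindings are installed and the client has a
readable /etc/ceph/ceph.conf plus keyring) that prints per-pool object
counts, so you can see where each pool sits relative to the ~20000
kobjects mark:

#!/usr/bin/env python
# Print per-pool object counts via librados (python-rados).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    for pool in cluster.list_pools():
        ioctx = cluster.open_ioctx(pool)
        try:
            # get_stats() returns a dict with num_objects, num_bytes, ...
            stats = ioctx.get_stats()
            print('%-30s %10.1f kobjects'
                  % (pool, stats['num_objects'] / 1000.0))
        finally:
            ioctx.close()
finally:
    cluster.shutdown()

Running 'rados df' from the CLI shows the same per-pool object counts
if you'd rather not script it.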
-Mykola
-----Original Message-----
From: Sven Höper <l...@mno.pw>
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rados complexity
Date: Sun, 05 Jun 2016 19:18:27 +0200
We've got a simple cluster with 45 OSDs, have more than 50000 kobjects,
and have not had any issues so far. Our cluster mainly serves some
rados pools for an application which usually writes data once and reads
it multiple times.
- Sven
Am Sonntag, den 05.06.2016, 18:47 +0200 schrieb Mykola Dvornik:
> Are there any ceph users with pools containing >20000 kobjects?
> 
> If so, have you noticed any instabilities of the clusters once this threshold is reached?
> 
> -Mykola