[ceph-users] bucket count limit

2013-08-22 Thread Mostowiec Dominik
Hi,
I am thinking about sharding S3 buckets in a Ceph cluster: creating one bucket per XX (256 buckets) or even one bucket per XXX (4096 buckets), where XX/XXX is a hex prefix taken from the MD5 of the object URL. Could this cause problems (performance, or some limits)?
--
Regards
Dominik
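[A minimal sketch of the prefix-based routing described above, assuming the shard key is the first two (or three) hex characters of the object key's MD5; the helper name and the "shard-" bucket naming are made up for illustration, not from the thread:]

  import hashlib

  def shard_bucket_for(object_key, prefix_len=2):
      # Route an object to one of 16**prefix_len buckets by MD5 prefix:
      # prefix_len=2 -> 256 buckets, prefix_len=3 -> 4096 buckets.
      digest = hashlib.md5(object_key.encode("utf-8")).hexdigest()
      return "shard-" + digest[:prefix_len]

  # Usage: returns e.g. "shard-ab"; the actual prefix depends on the key.
  print(shard_bucket_for("photos/2013/cat.jpg"))
  print(shard_bucket_for("photos/2013/cat.jpg", prefix_len=3))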

Re: [ceph-users] large memory leak on scrubbing

2013-08-19 Thread Mostowiec Dominik
Hi,
> Is that the only slow request message you see?
No. Full log: https://www.dropbox.com/s/i3ep5dcimndwvj1/slow_requests.txt.tar.gz
It starts with:
2013-08-16 09:43:39.662878 mon.0 10.174.81.132:6788/0 4276384 : [DBG] osd.4 10.174.81.131:6805/31460 reported failed by osd.50

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-08-19 Thread Mostowiec Dominik
Hi,
> Yes, it definitely can, as scrubbing takes locks on the PG, which will prevent reads or writes while the message is being processed (which will involve the rgw index being scanned).
Is it possible to tune the scrubbing config to eliminate slow requests and OSDs being marked down when a large rgw
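[Not from the thread, just a sketch of the kind of tuning meant above. These [osd] options existed in Ceph releases of that era; the values are illustrative assumptions, not tested recommendations:]

  [osd]
  # at most this many concurrent scrubs per OSD (default 1)
  osd max scrubs = 1
  # do not start new scrubs while the host load average is above this
  osd scrub load threshold = 0.5
  # do not shallow-scrub a PG more than once a day...
  osd scrub min interval = 86400
  # ...but force a scrub at least weekly, regardless of load
  osd scrub max interval = 604800
  # deep scrub (full object-content compare) at most every 2 weeks
  osd deep scrub interval = 1209600
  # give busy OSDs longer before peers report them failed
  osd heartbeat grace = 35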

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-08-16 Thread Mostowiec Dominik
Hi,
Thanks for your response.
> It's possible, as deep scrub in particular will add a bit of load (it goes through and compares the object contents).
Is it possible that scrubbing blocks access (RW, or only W) to the bucket index while it checks the .dir... file? When the rgw index is very large, I guess it
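[To see which PG, and which OSDs, hold a given bucket index object, something like the following should work; the pool name and the ".dir.default.12345.1" marker are placeholders that depend on the deployment (on older setups the index objects may live in the .rgw.buckets pool):]

  # Find the bucket's marker/id (output format varies by release):
  radosgw-admin bucket stats --bucket=mybucket

  # Map the index object to its PG and acting OSDs:
  ceph osd map .rgw.buckets.index .dir.default.12345.1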

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-25 Thread Mostowiec Dominik
it a little more light by changing config?
--
Regards
Dominik

-----Original Message-----
From: Studziński Krzysztof
Sent: Wednesday, July 24, 2013 9:48 AM
To: Gregory Farnum; Yehuda Sadeh
Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com; Mostowiec Dominik
Subject: RE: [ceph-users] Flapping