Hi,
I'm thinking about sharding S3 buckets in our Ceph cluster: creating one
bucket per XX prefix (256 buckets) or even one per XXX prefix (4096 buckets),
where XX/XXX are the leading hex characters of the MD5 hash of the object URL.
Could this cause any problems (performance, or some limits)?
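The sharding scheme above could be sketched as follows (a minimal sketch; the `bucket_for_key` helper and the "bucket-" naming are hypothetical, not part of any Ceph or S3 API):

```python
import hashlib

def bucket_for_key(key: str, prefix_len: int = 2) -> str:
    """Map an object key/URL to one of 16**prefix_len shard buckets,
    using the first hex characters of its MD5 digest.
    prefix_len=2 -> 256 buckets, prefix_len=3 -> 4096 buckets."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return "bucket-" + digest[:prefix_len]

# Example: the same key always maps to the same shard bucket.
print(bucket_for_key("photos/2013/cat.jpg"))
```

Since MD5 is effectively uniform over hex prefixes, objects should spread evenly across the shard buckets.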
--
Regards
Dominik
___
Hi,
Is that the only slow request message you see?
No.
Full log: https://www.dropbox.com/s/i3ep5dcimndwvj1/slow_requests.txt.tar.gz
It starts with:
2013-08-16 09:43:39.662878 mon.0 10.174.81.132:6788/0 4276384 : [DBG] osd.4
10.174.81.131:6805/31460 reported failed by osd.50
Hi,
Yes, it definitely can as scrubbing takes locks on the PG, which will prevent
reads or writes while the message is being processed (which will involve the
rgw index being scanned).
Is it possible to tune the scrubbing config to eliminate the slow requests
and OSDs being marked down when the rgw index is large?
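For reference, a few of the scrub-related OSD options that can be tuned (a sketch of a possible ceph.conf fragment; the specific values here are assumptions, check the defaults for your release):

```ini
[osd]
; allow at most one concurrent scrub per OSD
osd max scrubs = 1
; skip scheduled scrubs when host load is above this threshold
osd scrub load threshold = 0.5
; spread deep scrubs over a longer interval (seconds; 14 days here)
osd deep scrub interval = 1209600
```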
Hi,
Thanks for your response.
It's possible, as deep scrub in particular will add a bit of load (it
goes through and compares the object contents).
Is it possible that scrubbing blocks access (RW, or only W) to the bucket
index while it checks the .dir... file?
When the rgw index is very large, I guess this could be made a little
lighter by changing the config?
--
Regards
Dominik
-Original Message-
From: Studziński Krzysztof
Sent: Wednesday, July 24, 2013 9:48 AM
To: Gregory Farnum; Yehuda Sadeh
Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com; Mostowiec Dominik
Subject: RE: [ceph-users] Flapping