I added sharding to our busiest RGW sites, but that does not shard existing 
bucket indexes; it only applies to newly created buckets. Even with that 
change, I'm still considering moving the index pool to SSD, the main factor 
being the rate of writes: we are looking at a project that will push 
extremely high writes/sec through the RGWs.
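
For reference, turning on index sharding for newly created buckets is just a 
ceph.conf change; a minimal sketch, assuming a single default zone (the client 
section name and the shard count of 32 are only examples):

    [client.radosgw.gateway]
    # Example only: takes effect for buckets created after the gateways are
    # restarted with this set; existing bucket indexes stay unsharded.
    rgw override bucket index max shards = 32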

The other thing worth noting is that at that scale you also need to set 
filestore merge threshold and filestore split multiple considerably higher. 
Props to Michael Kidd @ RH for that tip; there's a mathematical formula in 
the filestore config reference.
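
If I'm reading that reference right, the split point works out as below; the 
40 / 8 values are only an illustration of "considerably higher", not a blanket 
recommendation:

    # A filestore subdirectory is split once it holds more than
    #   filestore_split_multiple * abs(filestore_merge_threshold) * 16
    # files, e.g. 8 * 40 * 16 = 5120 files per subdirectory with these values.
    [osd]
    filestore merge threshold = 40
    filestore split multiple = 8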

Warren

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Daniel 
Maraio
Sent: Tuesday, September 01, 2015 10:40 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Moving/Sharding RGW Bucket Index

Hello,

   I have two large buckets in my RGW and I think the performance is being 
impacted by the bucket index. One bucket contains 9 million objects and the 
other one has 22 million. I'd like to shard the bucket index and also change 
the ruleset of the .rgw.buckets.index pool to put it on our SSD root. I could 
not find any documentation on this issue. It looks like the bucket indexes can 
be rebuilt using the radosgw-admin bucket check command but I'm not sure how to 
proceed. We can stop writes or take the cluster down completely if necessary. 
My initial thought was to back up the existing index pool and create a new 
one. I'm not sure whether I can change the index_pool of an existing bucket; 
if that is possible, I assume I can point it at the new pool and run 
radosgw-admin bucket check to rebuild/shard the index.
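
Concretely, this is roughly what I had in mind (the bucket name and instance 
id are placeholders, and whether a hand-edited index_pool in the bucket 
instance metadata is actually honoured is exactly what I'm unsure about):

    # Look up the bucket's instance id and its current index placement
    radosgw-admin metadata get bucket:<bucket-name>
    radosgw-admin metadata get bucket.instance:<bucket-name>:<instance-id>

    # Then verify/rebuild the index
    radosgw-admin bucket check --bucket=<bucket-name> --check-objects --fix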

   Does anyone have experience getting sharding running with an existing 
bucket, or even moving the index pool to a different ruleset? When I change 
the crush ruleset for the .rgw.buckets.index pool to my SSD root we run into 
issues: buckets cannot be created or listed and writes stop working, though 
reads seem to work fine. Thanks for your time!
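
For reference, the ruleset change that triggers the problem is along these 
lines (the rule and root names come from our crush map, so treat them as 
examples):

    # Rule targeting the SSD root, with host as the failure domain
    ceph osd crush rule create-simple ssd-index-rule ssd host

    # Find the new rule's id, then point the index pool at it
    ceph osd crush rule dump ssd-index-rule
    ceph osd pool set .rgw.buckets.index crush_ruleset <rule-id>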

- Daniel
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
