[ceph-users] Re: How do you handle large Ceph object storage cluster?

2023-10-19 Thread Peter Grandi
> [...] (>10k OSDs, >60 PB of data).

6TB on average per OSD? Hopefully SSDs or RAID10 (or low-number, 3-5 disk) RAID5.

> It is entirely dedicated to object storage with S3 interface.
> Maintenance and its extension are getting more and more
> problematic and time consuming.

Ah the joys of a single
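(For reference, the 6TB/OSD figure follows directly from the numbers quoted; a minimal back-of-the-envelope sketch, with decimal units and variable names of my own choosing, not from the thread:)

    # Rough sanity check of the per-OSD average quoted above.
    total_data_pb = 60        # >60 PB of data in the cluster
    osd_count = 10_000        # >10k OSDs
    pb_to_tb = 1_000          # 1 PB = 1,000 TB (decimal units)

    avg_tb_per_osd = total_data_pb * pb_to_tb / osd_count
    print(f"Average data per OSD: {avg_tb_per_osd:.0f} TB")  # -> 6 TB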

[ceph-users] Re: How do you handle large Ceph object storage cluster?

2023-10-17 Thread Wesley Dillingham
Well, you are probably in the top 1% of cluster size. I would guess that trying to cut your existing cluster in half without incurring any downtime, as you shuffle existing buckets between the old cluster and the new cluster, would be harder than redirecting all new buckets (or users) to a second
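(A minimal sketch of the "send new users/buckets to a second cluster" idea, in Python with boto3. The endpoint URLs, cutoff date, and helper names are hypothetical, not from this thread; how "new" users are identified would depend on your own user database or RGW metadata:)

    from datetime import datetime, timezone
    import boto3

    # Hypothetical RGW endpoints for the existing and the second cluster.
    OLD_CLUSTER_ENDPOINT = "https://s3-old.example.internal"
    NEW_CLUSTER_ENDPOINT = "https://s3-new.example.internal"
    # Users created after this date are directed to the second cluster.
    CUTOFF = datetime(2023, 11, 1, tzinfo=timezone.utc)

    def endpoint_for_user(user_created_at: datetime) -> str:
        """Route users created after the cutoff to the second cluster."""
        return NEW_CLUSTER_ENDPOINT if user_created_at >= CUTOFF else OLD_CLUSTER_ENDPOINT

    def s3_client_for_user(user_created_at: datetime, access_key: str, secret_key: str):
        """Build a boto3 S3 client pointed at whichever cluster owns this user."""
        return boto3.client(
            "s3",
            endpoint_url=endpoint_for_user(user_created_at),
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )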