My 0.02: there are two kinds of balance, one for space utilization and
another for performance.
It seems you will be fine on space utilization, but you might suffer a bit
on performance as disk density increases. The new rack will hold 1/3 of the
data on 1/5 of the disks, if we assume the
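The arithmetic behind that concern can be sketched quickly (the 1/3 and 1/5
figures come from the numbers above; everything else is illustrative):

```python
# Back-of-the-envelope check of the imbalance described above:
# the new rack is assumed to hold 1/3 of the data on 1/5 of the disks.
data_share = 1 / 3   # fraction of cluster data landing on the new rack
disk_share = 1 / 5   # fraction of cluster disks in the new rack

# Per-disk load relative to a perfectly even spread across all disks.
new_rack_load = data_share / disk_share               # 5/3, ~1.67x average
old_racks_load = (1 - data_share) / (1 - disk_share)  # 5/6, ~0.83x average

print(f"per-disk load, new rack : {new_rack_load:.2f}x average")
print(f"per-disk load, old racks: {old_racks_load:.2f}x average")
```

So each disk in the denser rack would see roughly twice the I/O of a disk in
the older racks (1.67 / 0.83 ≈ 2), which is where the performance hit comes
from.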
Thank you Greg, much appreciated.
I'll test with crushtool to see if it complains about this new layout.
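For anyone else wanting to do the same dry run, a rough sketch of the
crushtool workflow (the file names are just placeholders, and the edit step
is of course specific to your new rack):

```shell
# Export the cluster's current CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt to add the new rack and its hosts ...

# Recompile, then simulate placement for a 3-replica pool:
# per-rule mapping statistics and per-OSD utilization
crushtool -c crushmap.txt -o crushmap.new
crushtool -i crushmap.new --test --show-statistics --num-rep 3
crushtool -i crushmap.new --test --show-utilization --num-rep 3
```

This only tests the map offline; nothing is injected into the cluster until
you run `ceph osd setcrushmap` yourself.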
George
On Mon, Feb 22, 2016 at 3:19 PM, Gregory Farnum wrote:
On Mon, Feb 22, 2016 at 9:29 AM, George Mihaiescu wrote:
Hi,
We have a fairly large Ceph cluster (3.2 PB) that we want to expand and we
would like to get your input on this.
The current cluster has around 700 OSDs (4 TB and 6 TB) across three racks,
with the largest pool being rgw, using replica 3.
For non-technical reasons (budgetary, etc) we are