Sure - you can play with the weights or crush weights to make sure that
all drives fill evenly to their respective capacities. The consequence
of doing this is, obviously, that about twice as much data ends up on the
drives with twice the size. But a perhaps less glaring consequence is that
those larger drives then also receive roughly twice the I/O, even though
they are not any faster than the smaller ones.
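(For concreteness - these are just the standard knobs, not quoted from the
original mail, and the OSD ids are made-up examples. The CRUSH weight
normally reflects the drive's capacity in TiB, while the override
"reweight" value is a 0.0-1.0 factor applied on top of it:)

  # CRUSH weight: usually the drive size in TiB, e.g. ~3.64 for a 4TB disk
  ceph osd crush reweight osd.12 3.64
  # override/"reweight": a 0.0-1.0 factor applied on top of the CRUSH weight
  ceph osd reweight 12 0.85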
Isn't there a way to deal with this kind of setup by playing with the
"weight" of an OSD? I don't mean the "crush weight".
I am in a situation where I had to think about adding a server with 24 x 2TB
disks - my other OSD nodes have 12 x 4TB. That is 48TB per node in both
cases.
You can adjust the primary affinity down on the larger drives so they’ll get
less read load. In one test I’ve seen this result in a 10-15% increase in read
throughput, but it depends on your situation.
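(Example of the command - my addition rather than a quote from the thread,
and the OSD id is hypothetical:)

  # make osd.7 (one of the larger drives) less likely to be chosen as primary
  ceph osd primary-affinity osd.7 0.5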
Optimal settings would require calculations that make my head hurt; maybe
someone has a tool for that.
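(A rough back-of-the-envelope, assuming data lands on each OSD in proportion
to its capacity and ignoring the details of how the primary is actually
picked within each acting set:)

  expected primary (read) load per OSD  ~  capacity * primary-affinity
  2TB OSD: 2 * 1.0 = 2
  4TB OSD: 4 * 0.5 = 2    <- roughly balanced

So leaving the small drives at 1.0 and dropping the 4TB drives to about 0.5
would be a reasonable starting point to experiment with.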
Hi Andras,
On 31/3/20 at 16:42, Andras Pataki wrote:
I'm looking for some advice on what to do about drives of different
sizes in the same cluster.
We have so far kept the drive sizes consistent on our main ceph
cluster (using 8TB drives). We're getting some new hardware with
larger,