Aaaand another dead end: there is too much metadata involved (bucket and
object ACLs, lifecycle, policy, …) that can't be migrated perfectly.
Also, lifecycles _might_ be affected if mtimes change.
So, I’m going to try and go back to a single-cluster multi-zone setup. For that
I’m
True, good luck with that, it's kind of a tedious process that just takes
too long :(
Nino
On Sat, Jun 17, 2023 at 7:48 AM Christian Theune wrote:
What got lost is that I need to change the pool’s m/k parameters, which is only
possible by creating a new pool and moving all data from the old pool. Changing
the crush rule doesn’t allow you to do that.
> On 16. Jun 2023, at 23:32, Nino Kotur wrote:
If you create a new crush rule for ssd/nvme/hdd and attach it to the existing
pool, you should be able to do the migration seamlessly while everything is
online... However, the impact on users will depend on storage device load and
network utilization, as it will create chaos on the cluster network.
Or did I get
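The CRUSH-rule approach described here can be sketched as follows, with hypothetical rule and pool names. Note that this only re-places the existing PGs under a new rule (e.g. onto a device class); it cannot change a pool's k/m, which is the sticking point above:

```shell
# Hypothetical names; works for moving a pool to a device class online.
ceph osd crush rule create-replicated on-ssd default host ssd
ceph osd pool set rgw.buckets.data crush_rule on-ssd
# The cluster then backfills in the background; watch progress with:
ceph -s
```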
Hi,
a further note to self and for posterity … ;)
This turned out to be a no-go as well, because you can’t silently switch the
pools to a different storage class: the objects will be found, but the index
still refers to the old storage class and lifecycle migrations won’t work.
I’ve
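For posterity, the storage-class route would look roughly like this; zone names, placement IDs and pool names here are assumptions, not the actual commands used. The catch described above is that the bucket index records each object's storage class, so swapping pools behind a class is not transparent:

```shell
# Hypothetical IDs/pools. Register an extra storage class in the zonegroup
# and point its data pool at the new EC pool in the zone:
radosgw-admin zonegroup placement add --rgw-zonegroup default \
    --placement-id default-placement --storage-class TEMP
radosgw-admin zone placement add --rgw-zone default \
    --placement-id default-placement --storage-class TEMP \
    --data-pool rgw.buckets.data.new
radosgw-admin period update --commit
```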
Following up to myself and for posterity:
I'm going to try to perform a switch here using (temporary) storage classes
and renaming the pools, to ensure that I can quickly change the STANDARD class to
a better EC pool and have new objects located there. After that we’ll add
(temporary)
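A rough sketch of that plan, with made-up pool names (treat this as an assumption about the mechanics, not the actual commands from this migration):

```shell
# Hypothetical names. Repoint STANDARD at the new EC pool so newly written
# objects land there:
radosgw-admin zone placement modify --rgw-zone default \
    --placement-id default-placement --storage-class STANDARD \
    --data-pool rgw.buckets.data.new
radosgw-admin period update --commit
# Rename pools so existing tooling/configuration keeps matching:
ceph osd pool rename rgw.buckets.data rgw.buckets.data.old
```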