" approach seems to be to create a new FS, and
> migrate things over...
> Am I right?
>
> Cheers,
> Oliver
>
>
> On 05.04.21 at 19:22, Peter Woodman wrote:
hi, i made a tool to do this. it’s rough around the edges and has some
known bugs with symlinks as parent paths, but it checks all file layouts to
see if they match the directory layout they’re in, and if not, makes them
so by copying and replacing. so to ‘migrate’, set your directory layouts and
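The check described above — does each file's layout match the layout of the directory it sits in — can be sketched in a few lines. This is a hedged illustration, not the tool's actual code: on a real CephFS mount the two layout strings would come from the `ceph.file.layout` and `ceph.dir.layout` extended attributes (e.g. via `os.getxattr`); the parsing helper and field names below just mirror the usual layout-string format.

```python
# Sketch of the core check: compare a file's CephFS layout to its
# parent directory's layout and report whether they match. On a real
# CephFS mount the raw strings come from the ceph.file.layout and
# ceph.dir.layout xattrs; here the parsing/comparison logic is shown
# on plain strings.

def parse_layout(raw: str) -> dict:
    """Parse a layout string like
    'stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data'
    into a dict of field -> value."""
    fields = {}
    for token in raw.split():
        key, _, value = token.partition("=")
        fields[key] = value
    return fields

def layout_matches(file_layout: str, dir_layout: str) -> bool:
    """True if the file already conforms to the directory's layout
    (same pool, striping, and object size)."""
    return parse_layout(file_layout) == parse_layout(dir_layout)
```

For a mismatch, the copy-and-replace step amounts to copying the file to a temporary name in the same directory (new files inherit the directory layout) and renaming it over the original.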
is the ceph insights plugin enabled? this caused huge bloat of the mon
stores for me. before i figured that out, i turned on leveldb compression
options on the mon store and got pretty significant savings, also.
On Tue, Mar 2, 2021 at 6:56 PM Lincoln Bryant wrote:
> Hi list,
>
> We
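For context: disabling the insights module is a single command (`ceph mgr module disable insights`), and `ceph tell mon.<mon-id> compact` asks a monitor to compact its store afterwards. The compression side lives in ceph.conf. The sketch below is hedged — `mon_compact_on_start` is a standard option, but the compression option name has varied across releases (`leveldb_*` vs `mon_leveldb_*` prefixes), so verify against your release's config reference:

```ini
[mon]
# enable compression on the mon's leveldb-backed store
# (option name varies by release; check your version's docs)
leveldb_compression = true
# compact the store every time the monitor starts
mon_compact_on_start = true
```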
I've been running ceph on a heterogeneous mix of rock64 and rpi4 SBCs. I've
had to do my own builds, as the upstream ones started off with thunked-out
checksumming due to (afaict) different arm feature sets between upstream's
build targets and my SBCs, but other than that one, I haven't run into any
,max_background_jobs=4,max_subcompactions=2
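The stray fragment above reads like the tail of a RocksDB tuning string; on releases where the mon store is RocksDB-backed, such knobs are passed through the `mon_rocksdb_options` setting. A full line would look something like this (illustrative — the original line was truncated in the thread):

```ini
[mon]
# RocksDB options string handed to the mon's backing store: LZ4
# compression plus background-compaction parallelism knobs
# (values illustrative)
mon_rocksdb_options = compression=kLZ4Compression,max_background_jobs=4,max_subcompactions=2
```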
On Sat, Jul 11, 2020 at 12:10 AM Peter Woodman wrote:
are you running the ceph insights mgr plugin? i was, and my cluster did
this on rebalance. turned it off, it's fine.
On Fri, Jul 10, 2020 at 5:17 PM Michael Fladischer wrote:
> Hi,
>
> our cluster is on Octopus 15.2.4. We noticed that our MON all ran out of
> space yesterday because the
entries.
Either way, that plugin seems pretty harmful.
On Tue, Feb 18, 2020 at 5:38 PM Peter Woodman wrote:
Yeah, applied that command.
For some reason, after 3 days of this, the behavior calmed down, and the
size of the mon store shrank down to ~100MB, where previously it was
growing to upwards of 6GB.
On Mon, Feb 17, 2020 at 3:14 AM Dan van der Ster wrote:
> This means it has been applied:
>
> #
Almost forgot, here's a graph of the change in write rate:
https://shortbus.org/x/ceph-mon-io.png
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hey, I've been running a ceph cluster of arm64 SOCs on Luminous for the
past year or so, with no major problems. I recently upgraded to 14.2.7, and
the stability of the cluster immediately suffered. Seemed like any mon
activity was subject to long pauses, and the cluster would hang frequently.