[ceph-users] Re: Cephfs: Migrating Data to a new Data Pool

2021-04-05 Thread Peter Woodman
> ... approach seems to be to create a new FS, and migrate things over...
> Am I right?
>
> Cheers,
> Oliver
>
> On 05.04.21 at 19:22, Peter Woodman wrote:
> > hi, i made a tool to do this. it’s rough around the edges and has some
> > known bu...

[ceph-users] Re: Cephfs: Migrating Data to a new Data Pool

2021-04-05 Thread Peter Woodman
hi, i made a tool to do this. it’s rough around the edges and has some known bugs with symlinks as parent paths, but it checks all file layouts to see if they match the directory layout they’re in, and if not, makes them so by copying and replacing. so to ‘migrate’, set your directory layouts and ...
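The copy-and-replace approach described above can be sketched in plain shell against the CephFS layout xattrs. This is a hedged sketch, not the poster's actual tool: the mount point `/mnt/cephfs/mydir` and pool name `newdata` are assumptions, and it skips the symlink edge cases the poster mentions.

```shell
# Assumes the new pool has already been added to the filesystem:
#   ceph fs add_data_pool <fs_name> newdata
TARGET_POOL=newdata
DIR=/mnt/cephfs/mydir

# 1. Point the directory layout at the new pool. Only files created
#    *after* this change inherit the new layout automatically.
setfattr -n ceph.dir.layout.pool -v "$TARGET_POOL" "$DIR"

# 2. Existing files keep their old layout, so rewrite each file whose
#    layout pool differs by copying it (the copy inherits the new
#    directory layout) and replacing the original.
find "$DIR" -type f | while read -r f; do
  pool=$(getfattr --only-values -n ceph.file.layout.pool "$f" 2>/dev/null)
  if [ "$pool" != "$TARGET_POOL" ]; then
    cp -a "$f" "$f.migrate.tmp" && mv "$f.migrate.tmp" "$f"
  fi
done
```

Note this doubles the space in flight for each file during the copy, and does nothing about open file handles; it is a cluster-dependent sketch, not runnable outside a CephFS mount.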

[ceph-users] Re: Monitor leveldb growing without bound v14.2.16

2021-03-02 Thread Peter Woodman
is the ceph insights plugin enabled? this caused huge huge bloat of the mon stores for me. before i figured that out, i turned on leveldb compression options on the mon store and got pretty significant savings, also.

On Tue, Mar 2, 2021 at 6:56 PM Lincoln Bryant wrote:
> Hi list,
>
> We ...

[ceph-users] Re: Ceph on ARM ?

2020-11-24 Thread Peter Woodman
I've been running ceph on a heterogeneous mix of rock64 and rpi4 SBCs. I've had to do my own builds, as the upstream ones started off with thunked-out checksumming due to (afaict) different arm feature sets between upstream's build targets and my SBCs, but other than that one, I haven't run into any ...

[ceph-users] Re: MON store.db keeps growing with Octopus

2020-07-10 Thread Peter Woodman
...,max_background_jobs=4,max_subcompactions=2

On Sat, Jul 11, 2020 at 12:10 AM Peter Woodman wrote:
> are you running the ceph insights mgr plugin? i was, and my cluster did
> this on rebalance. turned it off, it's fine.
>
> On Fri, Jul 10, 2020 at 5:17 PM Michael Fladischer wrote:
> ...
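The truncated snippet above looks like the tail of a `mon_rocksdb_options` value. A hedged reconstruction of what such a ceph.conf fragment might look like follows; the compression and write-buffer values are illustrative assumptions, only the two compaction options actually appear in the message.

```ini
[mon]
# Compression plus compaction tuning on the mon's RocksDB store.
# compression and write_buffer_size here are assumed example values;
# max_background_jobs / max_subcompactions are from the message above.
mon_rocksdb_options = compression=kLZ4Compression,write_buffer_size=33554432,max_background_jobs=4,max_subcompactions=2
```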

[ceph-users] Re: MON store.db keeps growing with Octopus

2020-07-10 Thread Peter Woodman
are you running the ceph insights mgr plugin? i was, and my cluster did this on rebalance. turned it off, it's fine.

On Fri, Jul 10, 2020 at 5:17 PM Michael Fladischer wrote:
> Hi,
>
> our cluster is on Octopus 15.2.4. We noticed that our MON all ran out of
> space yesterday because the ...
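The "turned it off" step above can be sketched with the standard mgr-module commands; this is a hedged operational sketch, not from the thread, and assumes the mon id matches the short hostname.

```shell
# Check whether the insights mgr module is enabled.
ceph mgr module ls | grep -i insights

# Disable it (the step the message describes).
ceph mgr module disable insights

# Optionally ask a mon to compact its store to reclaim the bloat.
ceph tell mon.$(hostname -s) compact
```

These commands require a running cluster and admin keyring, so they are not runnable standalone.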

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-06-10 Thread Peter Woodman
... entries. Either way, that plugin seems pretty harmful.

On Tue, Feb 18, 2020 at 5:38 PM Peter Woodman wrote:
> Yeah, applied that command.
>
> For some reason, after 3 days of this, the behavior calmed down, and the
> size of the mon store shrank down to ~100MB, where previously it w...

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-18 Thread Peter Woodman
Yeah, applied that command.

For some reason, after 3 days of this, the behavior calmed down, and the size of the mon store shrank down to ~100MB, where previously it was growing to upwards of 6GB.

On Mon, Feb 17, 2020 at 3:14 AM Dan van der Ster wrote:
> This means it has been applied:
>
> # ...
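One quick way to watch the mon store size the message is talking about — a hedged sketch assuming the default mon data directory layout and a mon id equal to the short hostname:

```shell
# Size of this node's monitor store; ~100MB vs ~6GB as described above.
du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
```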

[ceph-users] Re: Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-13 Thread peter woodman
Almost forgot, here's a graph of the change in write rate: https://shortbus.org/x/ceph-mon-io.png

[ceph-users] Excessive write load on mons after upgrade from 12.2.13 -> 14.2.7

2020-02-13 Thread Peter Woodman
Hey, I've been running a ceph cluster of arm64 SoCs on Luminous for the past year or so, with no major problems. I recently upgraded to 14.2.7, and the stability of the cluster immediately suffered. It seemed like any mon activity was subject to long pauses, and the cluster would hang frequently.