is almost zero. And those points where the write io drops were times when I dropped the mds caches.
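For the record, the cache drop I'm referring to is just the admin socket command ("mds.a" is a placeholder daemon name -- use whatever your deployment calls it):

```shell
# Ask the MDS to drop its cache and recall client caps.
ceph daemon mds.a cache drop

# Check cache usage before/after the drop.
ceph daemon mds.a cache status
```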
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Wed, Jul 3, 2024 at 7:49 PM Venky Shankar wrote:
> Hi Olli,
>
> On Tue, Jul 2,
"mlocate" packages. The default config (on Ubuntu at least) of updatedb for
"mlocate" does skip scanning cephfs filesystems, but not so for "locate",
which happily ventures onto all of your cephfs mounts :|
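The workaround on our boxes was to extend the prune list in /etc/updatedb.conf -- the list below is roughly the Ubuntu default plus the two ceph entries, so double-check it against your distro:

```shell
# /etc/updatedb.conf -- add "ceph" and "fuse.ceph" to PRUNEFS so updatedb
# never descends into CephFS mounts (kernel client or FUSE client).
PRUNEFS="NFS nfs nfs4 afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs curlftpfs ceph fuse.ceph"
```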
---
0 70158 86 TiB 234691728 117 TiB 0 B 0 B
Is there some way to force these to get trimmed?
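In case it helps, the stray counters I'm watching come from the MDS perf counters, and flushing the journal is one thing I've seen suggested to nudge stray processing -- I'm not sure it's guaranteed to help on every release ("mds.a" is a placeholder name):

```shell
# Stray counters live under mds_cache in the perf dump.
ceph daemon mds.a perf dump | grep -i stray

# Flushing the journal is sometimes suggested to kick stray trimming.
ceph daemon mds.a flush journal
```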
tnx,
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Fri, May 17, 2024 at 6:48 AM Gregory Farnum wrote:
> It's unfortu
the objects in the pool and delete all objects without
the tag and older than one year
Is there any tooling to do such an operation? Any risks or flawed logic
there?
...or any other ways to discover and get rid of these objects?
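To be concrete, the rough (and untested!) sketch I had in mind is something like the following. The pool name, the tag xattr name, and the mtime parsing are all assumptions on my part -- the "rados stat" output format may differ on your release -- and the actual rm is left commented out on purpose, since deleting data-pool objects by hand is inherently risky:

```shell
POOL=cephfs_data        # assumed data pool name
TAG=scrub_tag           # assumed xattr left by a tagged forward scrub
CUTOFF=$(date -d '1 year ago' +%s)

rados -p "$POOL" ls | while read -r obj; do
  # Skip objects carrying the tag xattr, i.e. objects known to be in use.
  rados -p "$POOL" getxattr "$obj" "$TAG" >/dev/null 2>&1 && continue

  # "rados stat" prints the object mtime; the parsing below assumes the
  # "pool/obj mtime <date>, size <n>" format and may need adjusting.
  mtime_str=$(rados -p "$POOL" stat "$obj" | sed 's/.*mtime \(.*\),.*/\1/')
  mtime=$(date -d "$mtime_str" +%s) || continue

  if [ "$mtime" -lt "$CUTOFF" ]; then
    echo "candidate orphan: $obj"
    # rados -p "$POOL" rm "$obj"   # deliberately left commented out
  fi
done
```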
Cheers!
---
Olli Rajala - Lead TD
Anima Vitae
to
run cephfs-data-scan scan_extents and scan_inodes while the fs is online?
Does it help if I give a custom tag while forward scrubbing and then
use --filter-tag on the backward scans?
...or is there some other way to check and cleanup orphans?
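In other words, something along these lines -- fs name and tag are placeholders, and I'd want confirmation of exactly what --filter-tag matches (tagged vs. untagged objects) before trusting the result:

```shell
# Tagged forward scrub from the root ("cephfs" and "mytag" are placeholders).
ceph tell mds.cephfs:0 scrub start / recursive mytag

# Backward scans against the data pool, filtered by that tag.
cephfs-data-scan scan_extents --filter-tag mytag cephfs_data
cephfs-data-scan scan_inodes --filter-tag mytag cephfs_data
```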
tnx,
---
Olli Rajala - Lead
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Sun, Dec 11, 2022 at 9:07 PM Olli Rajala wrote:
> Hi,
>
> I'm still totally lost with this issue. And now lately I've had a couple
> of incidents where the write bw has suddenly jumped to even cr
. Is there any tool or procedure to
safely check or rebuild the mds data? ...if this behaviour could be caused
by some hidden issue with the data itself.
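The closest things I've found are cephfs-journal-tool and cephfs-table-tool; a read-only inspection pass (fs name and rank are placeholders) would be something like this -- I'd take an export/backup before ever running any recover or reset subcommands:

```shell
# Read-only checks of the MDS journal and tables.
cephfs-journal-tool --rank=cephfs:0 journal inspect
cephfs-journal-tool --rank=cephfs:0 journal export /tmp/mds-journal.bin
cephfs-table-tool all show session
```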
Tnx,
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Fri, Nov 11, 2022 at 9:14 AM Venky
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Thu, Nov 10, 2022 at 8:18 AM Venky Shankar wrote:
> Hi Olli,
>
> On Mon, Oct 17, 2022 at 1:08 PM Olli Rajala wrote:
> >
> > Hi Patrick,
> >
> > With
: 30f9b38b-a62c-44bb-9e00-53edf483a415
Tnx!
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Mon, Nov 7, 2022 at 2:30 PM Milind Changire wrote:
> maybe,
>
>- use the top program to look at a threaded listing of the ceph-mds
>
the cache would
show any bw increase by running "tree" at the root of one of the mounts and
it didn't affect anything at the time. So basically the cache has been
fully saturated all this time now.
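(For the record, the threaded listing mentioned above is just:)

```shell
# Per-thread CPU view of the ceph-mds process.
top -H -p "$(pidof ceph-mds)"
```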
Boggled,
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
ww
when I did Octopus->Pacific upgrade...
Cheers,
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Mon, Oct 24, 2022 at 9:36 PM Olli Rajala wrote:
> I tried my luck and upgraded to 17.2.4 but unfortunately that didn't make
or mechanism could cause such high idle write
io? I've tried to fiddle a bit with some of the mds cache trim and memory
settings but I haven't noticed any effect there. Any pointers appreciated.
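For reference, the knobs I've been fiddling with are of this shape -- the values here are purely illustrative, not recommendations:

```shell
# Illustrative values only; size the memory limit to your hardware.
ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB
ceph config set mds mds_cache_trim_threshold 262144
ceph config set mds mds_cache_trim_decay_rate 1.0
```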
Cheers,
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
ue what to focus on and how to interpret that.
Here's a perf dump if you or anyone could make something out of that:
https://gist.github.com/olliRJL/43c10173aafd82be22c080a9cd28e673
Tnx!
o.
---
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---
On Fri,
Hi,
I'm seeing constant 25-50MB/s writes to the metadata pool even when all
clients and the cluster are idle and in a clean state. This surely can't be
normal?
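For context, I'm reading the write rate from the per-pool stats ("cephfs_metadata" is a placeholder for the metadata pool name):

```shell
# Client I/O rates broken down by pool; watch it to see the trend over time.
ceph osd pool stats cephfs_metadata
```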
There are no apparent issues with the performance of the cluster, but this
write rate seems excessive and I don't know where to look for the