The autotrim didn't do much. Neither did the OSD/PG scrub and deep-scrub.
Thank you.
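If it helps: since the warning is only re-evaluated on deep scrub, a targeted deep scrub of just the PG holding a flagged object is usually quicker than waiting for the schedule. A minimal sketch (pool and object name taken from the health output quoted later in this thread; the object name looks truncated by the archive, so substitute the full one):

ceph osd map con-fs2-meta1 1000eec35f5.0100   # shows which PG the object maps to
ceph pg deep-scrub <pgid>                     # re-scrub only that PG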
-Original Message-
From: Frank Schilder [mailto:fr...@dtu.dk]
Sent: Tuesday, August 31, 2021 9:27 PM
To: Dan van der Ster
Cc: Patrick Donnelly ; ceph-users
Subject: [ceph-users] Re: LARGE_OMAP_OBJECTS: any proper
Hi Dan,
unfortunately, the file/directory names were generated like one would do for
temporary files. No clue about their location. I would need to find such a file
while it exists. Of course, I could execute a find on the snapshot ...
Just kidding. The large omap count is going down already,
Dear Dan and Patrick,
the find didn't return anything. With this and the info below, am I right to
assume that these were temporary working directories that got caught in a
snapshot (we use rolling snapshots)?
I would really appreciate any ideas on how to find out the original file system path.
Hi,
I don't know how to find a full path from a dir object.
But perhaps you can make an educated guess based on what you see in:
rados listomapkeys --pool=con-fs2-meta1 1000eec35f5.0100 | head -n 100
Those should be the directory entries. (s/_head//)
-- Dan
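For completeness, a minimal sketch of going from that omap object back to a directory: the part of the object name before the dot is the directory's inode number in hex, so a find by inode can locate it if it still exists (the /cephfs mount point and the .snap search are taken from this thread; I'm not certain -inum matches inside snapshot trees on every client, so treat the second find as best-effort):

ino=$(printf '%d' 0x1000eec35f5)       # object name prefix = dir inode in hex
find /cephfs -inum "$ino"              # live file system
find /cephfs/.snap -inum "$ino"        # rolling snapshots, can be very slow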
On Tue, Aug 31, 2021 at 2:31 PM
Dear Dan and Patrick,
I have the suspicion that I'm looking at large directories in the snapshots
that no longer exist on the file system. Hence, the omap objects
are not fragmented as explained in the tracker issue. Here is the info as you
asked me to pull out:
> find /cephfs
Hi Frank,
On Wed, Aug 25, 2021 at 6:27 AM Frank Schilder wrote:
>
> Hi all,
>
> I have the notorious "LARGE_OMAP_OBJECTS: 4 large omap objects" warning and
> am again wondering if there is any proper action one can take except "wait it
> out and deep-scrub (numerous ceph-users threads)" or
Hi Dan,
he he, I built a large omap object cluster, we are up to 5 now :)
It is possible that our meta-data pool became a bottleneck. I'm re-deploying
OSDs on these disks at the moment, increasing the OSD count from 1 to 4. The
disks I use require high concurrency access to get close to spec
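As an aside, a minimal sketch of that re-deployment step (the device path is a placeholder; --report only previews the layout, the second command applies it):

ceph-volume lvm batch --report --osds-per-device 4 /dev/nvme0n1
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1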
On Thu, Aug 26, 2021 at 9:49 AM Frank Schilder wrote:
>
> Hi Dan,
>
> he he, I built a large omap object cluster, we are up to 5 now :)
>
> It is possible that our meta-data pool became a bottleneck. I'm re-deploying
> OSDs on these disks at the moment, increasing the OSD count from 1 to 4. The
Hi Dan,
> [...] Do you have some custom mds config in this area?
none that I'm aware of. What MDS config parameters should I look for?
I recently seem to have had problems with very slow dirfrag operations that
made an MDS unresponsive long enough for a MON to kick it out. I had to
increase
Hi Dan,
thanks for looking at this. Here are the lines from health detail and ceph.log:
[root@gnosis ~]# ceph health detail
HEALTH_WARN 4 large omap objects
LARGE_OMAP_OBJECTS 4 large omap objects
4 large objects found in pool 'con-fs2-meta1'
Search the cluster log for 'Large omap object found' for more details.
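A minimal sketch of that log search, assuming the default cluster log location on a monitor host; each match should name the offending pool, PG, object and key count:

grep 'Large omap object' /var/log/ceph/ceph.log
zgrep 'Large omap object' /var/log/ceph/ceph.log*.gz   # rotated logs, if any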
Hi,
On Wed, Aug 25, 2021 at 2:37 PM Frank Schilder wrote:
>
> Hi Dan,
>
> > [...] Do you have some custom mds config in this area?
>
> none that I'm aware of. What MDS config parameters should I look for?
This covers the topic and relevant config:
Those are probably large directories; each omap key is a file/subdir
in the directory.
Normally the mds fragments dirs across several objects, so you
shouldn't have a huge number of omap entries in any single object.
Do you have some custom mds config in this area?
-- dan
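For anyone hitting this later, a minimal sketch of checking the settings usually meant by "config in this area" (option names are from current Ceph documentation; defaults differ between releases, and this assumes the centralized config database, i.e. Mimic or newer):

ceph config get mds mds_bal_split_size          # entries before a dirfrag is split
ceph config get mds mds_bal_fragment_size_max   # hard cap on entries per fragment
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold   # when deep scrub flags an object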
On Wed, Aug 25,
Hi Frank,
Which objects are large? (You should see this in ceph.log when the
large obj was detected).
-- dan
On Wed, Aug 25, 2021 at 12:27 PM Frank Schilder wrote:
>
> Hi all,
>
> I have the notorious "LARGE_OMAP_OBJECTS: 4 large omap objects" warning and
> am again wondering if there is any