4MiB), though that's probably been expired by
> this point so you'll need to get another dump which contains a large
> one. I haven't looked at these data structures using these tools in a
> while so I'll leave more detail up to Xiubo.
> -Greg
>
> On Fri, Dec
3+",
"osd_ops": [
"write 1236893~510649 [fadvise_dontneed] in=510649b"
]
},
{
"tid": 9532808,
"pg": "3.abba2e66",
"osd": 2,
"object_id": "
now
> exactly what metric you're using to say something's 320KB in size. Can
> you explain more?
>
> It might help if you dump the objecter_requests from the MDS and share
> those — it'll display what objects are being written to with what
> sizes.
> -Greg
>
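For reference, a rough sketch of how that objecter_requests dump could be pulled and summarized, assuming the usual admin-socket command (ceph daemon mds.<name> objecter_requests) and the field names visible in the dump fragment above; the daemon name is a placeholder:

#!/usr/bin/env python3
# Rough sketch: summarize in-flight object writes from an MDS objecter dump.
# Assumes the usual admin-socket command and the field names visible in the
# dump fragment above ("ops" -> "osd_ops" entries like "write <off>~<len> ...").
import json
import re
import subprocess

MDS = "mds.a"  # placeholder daemon name, substitute your own

def objecter_requests(mds=MDS):
    out = subprocess.check_output(["ceph", "daemon", mds, "objecter_requests"])
    return json.loads(out)

def write_sizes(dump):
    # Yield (object_id, length_in_bytes) for every in-flight write op.
    for op in dump.get("ops", []):
        for osd_op in op.get("osd_ops", []):
            m = re.match(r"write (\d+)~(\d+)", osd_op)
            if m:
                yield op.get("object_id"), int(m.group(2))

sizes = list(write_sizes(objecter_requests()))
print(f"{len(sizes)} writes in flight, {sum(s for _, s in sizes)} bytes total")
for oid, length in sorted(sizes, key=lambda x: -x[1]):
    print(f"  {oid}: {length} bytes")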
Hi All,
We have been observing that if we let our MDS run for some time, the
bandwidth usage of the disks in the metadata pool starts increasing
significantly (whilst IOPS stays roughly constant), even though the number of
clients, the workloads, and everything else remain unchanged.
However, after restarti
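One quick way to quantify that observation is to derive the average bytes per write op for the metadata pool. Below is a sketch using ceph osd pool stats -f json; the client_io_rate field names and the pool name are assumptions to adapt to your cluster.

#!/usr/bin/env python3
# Sketch: compute average bytes per write op on the metadata pool, to check
# whether "bandwidth up, IOPS flat" means the average write size is growing.
# The pool name and the client_io_rate field names are assumptions.
import json
import subprocess

POOL = "cephfs_metadata"  # placeholder pool name

out = subprocess.check_output(["ceph", "osd", "pool", "stats", POOL, "-f", "json"])
rate = json.loads(out)[0].get("client_io_rate", {})
wb = rate.get("write_bytes_sec", 0)
wo = rate.get("write_op_per_sec", 0)
print(f"write: {wb} B/s over {wo} op/s"
      + (f" => ~{wb / wo:.0f} B per write op" if wo else ""))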
Dear Ceph Users,
We are experiencing a strange behaviour on Ceph v15.2.9 where a set of PGs
seems to be stuck in the active+clean+snaptrim state (for almost a day now).
Usually snaptrim is quite fast (done in a few minutes), but now in the
osd logs we see slowly increasing trimq numbers, with s
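A rough way to see how large the trim backlog is per PG, as a sketch: read the snaptrimq_len per-PG field from ceph pg dump pgs -f json (the JSON wrapper differs between releases, so both shapes are handled below).

#!/usr/bin/env python3
# Sketch: list PGs that still have snapshots queued for trimming.
# Assumes the snaptrimq_len per-PG field in `ceph pg dump pgs -f json`.
import json
import subprocess

out = subprocess.check_output(["ceph", "pg", "dump", "pgs", "-f", "json"])
data = json.loads(out)
pgs = data.get("pg_stats", []) if isinstance(data, dict) else data

backlog = [(pg["pgid"], pg["state"], pg.get("snaptrimq_len", 0)) for pg in pgs]
backlog = sorted((b for b in backlog if b[2] > 0), key=lambda b: -b[2])
for pgid, state, qlen in backlog:
    print(f"{pgid:12} {state:32} trimq={qlen}")
print(f"{len(backlog)} PGs with a non-empty snap trim queue")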
Hi,
We experienced a strange issue with a CephFS snapshot becoming partially
unreadable.
The snapshot was created about 2 months ago and we started a read operation
from it. For a while everything was working fine, with all directories
accessible; however, at some point clients (FUSE, v15.2.9) s
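If it helps to narrow down where reads start failing, here is a minimal sketch that walks the snapshot through CephFS's hidden .snap directory and reports directories that can no longer be listed; the mount point and snapshot name are placeholders.

#!/usr/bin/env python3
# Sketch: walk a CephFS snapshot via the hidden .snap directory and report
# every directory that can no longer be listed. Paths are placeholders.
import os

SNAP_ROOT = "/mnt/cephfs/.snap/mysnap"  # placeholder mount point and snapshot name

errors = []
for dirpath, dirnames, filenames in os.walk(SNAP_ROOT, onerror=errors.append):
    pass  # we only care about directories that fail to list

for err in errors:
    print(f"cannot read {err.filename}: {err.strerror}")
print(f"{len(errors)} unreadable directories under {SNAP_ROOT}")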