[ceph-users] Re: cephfs inode backtrace information

2024-04-02 Thread Loïc Tortay
On 29/03/2024 04:18, Niklas Hambüchen wrote: Hi Loïc, I'm surprised by that high storage amount, my "default" pool uses only ~512 bytes per file, not ~32 KiB like in your pool. That's a 64x difference! (See also my other response to the original post I just sent.) I'm using Ceph 16.2.1.
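A rough way to reproduce the per-file numbers being compared above is to divide the pool's stored bytes by its object count. A minimal sketch, assuming the Ceph CLI is available and the first data pool is named "cephfs_data" (hypothetical); older releases report "bytes_used" where newer ones report "stored":

    #!/usr/bin/env python3
    # Estimate average bytes per object in a CephFS data pool from
    # "ceph df" JSON output. The pool name is an assumption; adjust it.
    import json
    import subprocess

    POOL = "cephfs_data"  # hypothetical pool name

    out = subprocess.run(["ceph", "df", "--format", "json"],
                         capture_output=True, check=True, text=True).stdout
    for pool in json.loads(out)["pools"]:
        if pool["name"] == POOL:
            s = pool["stats"]
            if s["objects"]:
                print(f'{POOL}: {s["objects"]} objects, '
                      f'{s["stored"] / s["objects"]:.0f} bytes/object')

Backtrace-only objects in the default data pool are zero-length and carry the backtrace in an xattr, so most of their footprint is per-object overhead rather than file data.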

[ceph-users] Re: cephfs inode backtrace information

2024-02-01 Thread Loïc Tortay
On 31/01/2024 20:13, Patrick Donnelly wrote: On Tue, Jan 30, 2024 at 5:03 AM Dietmar Rieder wrote: Hello, I have a question regarding the default pool of a cephfs. According to the docs, it is recommended to use a fast SSD replicated pool as the default pool for cephfs. I'm asking what are the
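The setup usually discussed alongside that recommendation: keep a small, fast replicated pool as the default (first) data pool, add a bulk pool with "ceph fs add_data_pool", and point directories at the bulk pool via the layout vxattr. A minimal sketch, assuming a mount at /mnt/cephfs and an already-added pool named "cephfs_ec_data" (both hypothetical):

    #!/usr/bin/env python3
    # Direct new file data under a directory to a bulk data pool via the
    # CephFS layout vxattr; the default pool keeps backtraces/metadata.
    import os

    DIR = "/mnt/cephfs/bulk"    # hypothetical directory
    POOL = b"cephfs_ec_data"    # hypothetical pool, added via add_data_pool

    os.makedirs(DIR, exist_ok=True)
    os.setxattr(DIR, "ceph.dir.layout.pool", POOL)
    print(os.getxattr(DIR, "ceph.dir.layout.pool").decode())

Existing files keep their old layout; only files created after the vxattr is set store their data in the new pool.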

[ceph-users] Re: stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-18 Thread Loïc Tortay
On 18/10/2023 10:02, Frank Schilder wrote: Hi Loïc, thanks for the pointer. It's kind of the opposite extreme to dropping just everything. I need to know the file name that is in cache. I'm looking for a middle way, say, "drop_caches -u USER" that drops all caches of files owned by user USER.
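One possible middle way along those lines, sketched with the usual caveats: walk a tree and ask the kernel to drop cached pages only for regular files owned by a given user, via posix_fadvise(POSIX_FADV_DONTNEED). This is a local page-cache hint, not a Ceph command, and opening each file briefly touches client caps; paths and names below are hypothetical:

    #!/usr/bin/env python3
    # Drop page cache for regular files owned by one user under a tree.
    import os
    import pwd
    import stat
    import sys

    def drop_user_caches(root: str, user: str) -> None:
        uid = pwd.getpwnam(user).pw_uid
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                    if st.st_uid != uid or not stat.S_ISREG(st.st_mode):
                        continue
                    fd = os.open(path, os.O_RDONLY)
                    try:
                        # Hint only; the kernel may keep pages in use.
                        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
                    finally:
                        os.close(fd)
                except OSError:
                    continue  # skip files that vanish or deny access

    if __name__ == "__main__":
        drop_user_caches(sys.argv[1], sys.argv[2])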

[ceph-users] Re: ceph_leadership_team_meeting_s18e06.mkv

2023-09-08 Thread Loïc Tortay
On 07/09/2023 21:33, Mark Nelson wrote: Hi Rok, We're still trying to catch what's causing the memory growth, so it's hard to guess which releases are affected. We know it's happening intermittently on a live Pacific cluster at least. If you have the ability to catch it while it's
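Catching it while it is happening usually means sampling. A minimal polling sketch, assuming it runs on the MDS host with access to the admin socket and that the daemon is named "mds.a" (hypothetical); the dump_mempools output layout may vary between releases:

    #!/usr/bin/env python3
    # Periodically log total mempool usage from the MDS admin socket so a
    # memory-growth episode can be correlated with cluster activity.
    import json
    import subprocess
    import time

    DAEMON = "mds.a"  # hypothetical daemon name

    while True:
        out = subprocess.run(["ceph", "daemon", DAEMON, "dump_mempools"],
                             capture_output=True, check=True, text=True).stdout
        total = json.loads(out)["mempool"]["total"]
        print(f"{time.strftime('%F %T')} items={total['items']} "
              f"bytes={total['bytes']}", flush=True)
        time.sleep(60)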